• 31337 · 12 points · 1 year ago

    Interesting to see the biases of different models.

    Prompt: 31337, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 713069833, Size: 1024x1024, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, VAE hash: 63aeecb90f, VAE: sdxl_vae.safetensors, Refiner: sd_xl_refiner_1.0 [7440042bbd], Refiner switch at: 0.8, Version: v1.6.0
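    If anyone wants to try reproducing this outside A1111, a rough diffusers equivalent would look something like the sketch below. Settings (prompt, steps, CFG, seed, size, refiner switch at 0.8) are taken from the metadata above; the seed and sampler behavior won't match A1111's output exactly.

    ```python
    import torch
    from diffusers import (
        StableDiffusionXLPipeline,
        StableDiffusionXLImg2ImgPipeline,
        DPMSolverMultistepScheduler,
    )

    base = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0",
        text_encoder_2=base.text_encoder_2,
        vae=base.vae,
        torch_dtype=torch.float16,
    ).to("cuda")

    # DPM++ 2M Karras equivalent in diffusers.
    base.scheduler = DPMSolverMultistepScheduler.from_config(
        base.scheduler.config, use_karras_sigmas=True
    )
    refiner.scheduler = DPMSolverMultistepScheduler.from_config(
        refiner.scheduler.config, use_karras_sigmas=True
    )

    generator = torch.Generator("cuda").manual_seed(713069833)

    # Base model handles the first 80% of the 20 steps, then hands its
    # latents to the refiner ("Refiner switch at: 0.8").
    latents = base(
        prompt="31337",
        num_inference_steps=20,
        guidance_scale=7.0,
        width=1024,
        height=1024,
        denoising_end=0.8,
        output_type="latent",
        generator=generator,
    ).images

    image = refiner(
        prompt="31337",
        num_inference_steps=20,
        guidance_scale=7.0,
        denoising_start=0.8,
        image=latents,
        generator=generator,
    ).images[0]
    image.save("31337_sdxl.png")
    ```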

    Bing Image Creator

    • Thelsim · 4 points · 1 year ago

      Midjourney seems to associate it with landscapes:
      [image: 31337]

    • Krank Star@lemmy.world (OP) · 3 points · 1 year ago

      [image: 1000011899]

      Prompt: 31337, Steps: 20, Sampler: DPM++ 3M SDE Karras, CFG scale: 7, Seed: 3244847912, Size: 768x1280, Model hash: 74dda471cc, Model: realvisxlV20_v20Bakedvae, Clip skip: 2, RNG: CPU, Version: v1.6.0
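      Roughly the same thing in diffusers if anyone wants to try this checkpoint: load the community .safetensors directly (the file path below is just an example) and set the 3M SDE Karras scheduler. As with the sketch above, it won't reproduce the A1111 output bit-for-bit.

      ```python
      import torch
      from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

      # Load the community checkpoint (VAE is baked in) from a local file.
      pipe = StableDiffusionXLPipeline.from_single_file(
          "realvisxlV20_v20Bakedvae.safetensors", torch_dtype=torch.float16
      ).to("cuda")

      # DPM++ 3M SDE Karras equivalent in diffusers.
      pipe.scheduler = DPMSolverMultistepScheduler.from_config(
          pipe.scheduler.config,
          algorithm_type="sde-dpmsolver++",
          solver_order=3,
          use_karras_sigmas=True,
      )

      image = pipe(
          prompt="31337",
          num_inference_steps=20,
          guidance_scale=7.0,
          width=768,
          height=1280,
          # A1111 counts clip skip from 1, diffusers from 0, so
          # "Clip skip: 2" roughly maps to clip_skip=1 here.
          clip_skip=1,
          generator=torch.Generator("cuda").manual_seed(3244847912),
      ).images[0]
      image.save("31337_realvisxl.png")
      ```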

      • 31337 · 2 points · 1 year ago

        Hmm. Looks like SD associates 31337 with people holding fruits or vegetables. Perhaps there were some images in its training set with 31337 in their filename.

        Bing Image Creator associates it with cyberpunk/vaporwave aesthetics, which is closer to the mark, since 31337 is leetspeak for "elite". Bing appears to have a better and larger language model behind it, which allows for better associations, and probably a larger, cleaner training dataset.

        Just skimmed through the DALL-E 3 paper, and yeah, it's probably the result of the better training data that was generated by GPT-4V recaptioning images.