AppsusUK / NFT-Art-Generator

Easy-to-use NFT art generator app for Windows/Linux/Mac

Master MetaData File & Rarity File

DRM-Scripts opened this issue · comments

Pushing the boat out here, but I know how handy this would be for people (especially newcomers).

Could you add an option to create a master metadata file containing the data from all of the NFT JSON files?

And possibly also a rarity JSON file that uses the master metadata file to give the percentage of each trait, etc.

I won't be cheeky and suggest it also creates a rarity rank for each NFT based on its traits, lol.
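
To make the request concrete, here is a rough sketch of what I mean by a master metadata file. It is only an illustration: it assumes the generator's output/metadata folder of numbered JSON files (1.json, 2.json, ...), and the collection.json output name is just a placeholder.

import os
import json

# Folder of per-NFT metadata files (1.json, 2.json, ...) written by the generator
md_path = os.path.join("output", "metadata")

# Sort numerically so 2.json comes before 10.json
files = [f for f in os.listdir(md_path) if f.endswith(".json")]
files.sort(key=lambda f: int(os.path.splitext(f)[0]))

master = []
for filename in files:
    with open(os.path.join(md_path, filename)) as jf:
        master.append(json.load(jf))

# Write every NFT's metadata into a single master file
with open("collection.json", "w") as out:
    json.dump(master, out, indent=2)

print("Merged " + str(len(master)) + " metadata files into collection.json")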

Good work on the app and tackling suggestions so far 👍

I was just about to create a request for something similar.

Suggestions above would be great 👍

commented

I whipped up a PoC for this change in PR #9.
The generated artwork.json is great for use with projects like scaffold-eth and others.
Hope this helps!

I put something together that creates a master metadata file and exports it as a CSV.
First, I wrote a script that removes some properties I did not want in my metadata: "fee_recipient", "seller_fee_basis_points", "external_url", and "hash". Then I wrote a second script that exports all of the attributes to a CSV file.

I then take the contents of that CSV and paste it into a Google Sheet I created, which gives me the percentages and the number of times each trait appears in my collection.

This is my "editMeta.py" code that should be placed in the same directory as the "appsus-nft-art-generator.0.0.6.exe" file.

import os
import json

# Locate the metadata directory relative to this script
dir_path = os.path.dirname(os.path.realpath(__file__))
md_path = os.path.join(dir_path, "output", "metadata")

# List of generated metadata files (one JSON file per NFT)
metadataFiles = next(os.walk(md_path))[2]
total = len(metadataFiles)

# Keys to be removed from every metadata file
keyToDelete = ["fee_recipient", "seller_fee_basis_points", "external_url", "hash"]

# Remove the unwanted keys from a single metadata file
def removeKey(filename):
    filePath = os.path.join(md_path, filename)

    with open(filePath) as jf:
        jsonFile = json.load(jf)

    print('Length of JSON object before cleaning: ', len(jsonFile.keys()))

    # Keep only the keys that are not in the delete list
    cleanedJson = {key: value for key, value in jsonFile.items()
                   if key not in keyToDelete}

    print('Length of JSON object after cleaning: ', len(cleanedJson.keys()))

    with open(filePath, 'w') as jf:
        json.dump(cleanedJson, jf)

# Remove keys for the full collection (files are named 1.json, 2.json, ...)
for x in range(1, total + 1):
    removeKey(str(x) + ".json")

print(str(total) + " json files found.")
print("Removed keys from " + str(total) + " json files.")

This is my "masterMeta.py" file that grabs the attributes only of each json metadata file and exports to CSV. Please note you will need to change the "csv_columns = " to work with your project. Again, should be placed in the same directory as the "appsus-nft-art-generator.0.0.6.exe" file.

import os
import json
import csv


# Locate the metadata directory relative to this script
dir_path = os.path.dirname(os.path.realpath(__file__))
md_path = os.path.join(dir_path, "output", "metadata")

# List of generated metadata files (one JSON file per NFT)
metadataFiles = next(os.walk(md_path))[2]
total = len(metadataFiles)


# Edit this for CSV Columns: 'NFT Number' plus every trait_type in your collection
csv_columns = ['NFT Number', 'Background', 'Sky Texture', 'Celestial', 'Hills', 'Clouds', 'Ocean', 'Land', 'Foliage', 'Foreground']

# One dict per NFT, mapping trait_type -> value
att_array = []

# Pull the traits out of a single metadata file
def pullTraits(filename):
    with open(os.path.join(md_path, filename), 'r') as jf:
        m_data = json.load(jf)

    # Start each row with the NFT number taken from the file name
    attDict = {'NFT Number': os.path.splitext(filename)[0]}

    # Each entry in "attributes" is a {"trait_type": ..., "value": ...} pair
    for attribute in m_data.get("attributes", []):
        attDict[attribute['trait_type']] = attribute['value']

    att_array.append(attDict)


# Create the master CSV file from the collected rows
def masterFile():
    csv_file = "masterFile.csv"
    try:
        with open(csv_file, 'w', newline='') as csvfile:
            writer = csv.DictWriter(csvfile, fieldnames=csv_columns)
            writer.writeheader()
            for row in att_array:
                writer.writerow(row)
    except IOError:
        print("I/O error")


# Collect the traits from every metadata file (named 1.json, 2.json, ...)
for x in range(1, total + 1):
    pullTraits(str(x) + ".json")

# Also save the combined attributes as a master JSON file
with open("data.json", "w") as jsonFile:
    json.dump(att_array, jsonFile)

masterFile()
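
Run "python masterMeta.py" after the clean-up script; it writes data.json (all attributes combined into one JSON list) and masterFile.csv next to the script.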

Finally, I exported the contents of that CSV file to a Google Sheet I created, which checks for duplicates and tells me the rarity percentage and the number of times each trait occurs in the collection.
You will need to change things to work with your project.
I followed this guide to replace any empty cells with "None" so that I can see how many times an NFT did not have a given trait:
https://www.statology.org/google-sheets-replace-blank-cells-with-zero/

Here is the Google Sheet: https://docs.google.com/spreadsheets/d/1-e2wk0JecKeMSZT7irD7I1-HhSdXCEXwagXG4vljxhE/edit?usp=sharing
Copy it and change it to work with your project.
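
If you would rather stay in Python than use a spreadsheet, here is a minimal sketch (my own addition, not part of the scripts above) that reads the masterFile.csv produced by masterMeta.py and prints, per column, how many times each trait value occurs and its percentage of the collection, counting blank cells as "None" just like the sheet does.

import csv
from collections import Counter

# Read the rows exported by masterMeta.py
with open("masterFile.csv", newline='') as csvfile:
    reader = csv.DictReader(csvfile)
    columns = reader.fieldnames
    rows = list(reader)

total = len(rows)

# Count every trait value per column, treating blank cells as "None"
for column in columns:
    if column == 'NFT Number':
        continue
    counts = Counter((row[column] or "None") for row in rows)
    print(column)
    for value, count in counts.most_common():
        print("  {}: {} ({:.2f}%)".format(value, count, 100 * count / total))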

If you'd like to donate for my efforts: rs7677.eth