paperswithcode / paperswithcode-client

API Client for paperswithcode.com

How to access the unlisted datasets in PWC?

zhimin-z opened this issue

I discovered that the main dataset page lists a total of 9,753 machine-learning datasets.
However, after navigating through pages 1 to 100, I found no way to reach the datasets that are not listed within the first 100 pages. Even when I manually requested pages beyond 100, the website returned the same dataset list as page 100.

Could you please advise whether there is a method to retrieve the datasets beyond the first 100 pages? Your assistance in this matter would be greatly appreciated. @alefnula @lambdaofgod @rstojnic @mkardas

We temporarily disabled dataset browsing because someone was DDoS-ing the website with a bot. It looks like they are running a broken bot that tries all kinds of nonsensical dataset filters, which is why we've disabled browsing for now. It should be back shortly after we fully identify and block them.

Dear @rstojnic,

I hope this message finds you well. After reading your comment, I wanted to reach out and clarify that the activity you observed may be related to my research efforts, though I cannot be certain. I have been collecting dataset information for research on dataset evolution, which involves gathering data from various sources, including your platform. Here is my code:

import pickle

from paperswithcode import PapersWithCodeClient

client = PapersWithCodeClient(token="XXXX")  # redacted API token

page = 1
scrape = True
dataset_full = {}

# Walk through the paginated dataset list until a page request fails.
while scrape:
    try:
        dataset_page = client.dataset_list(page=page)
        for dataset in dataset_page.results:
            dataset_full[dataset.id] = {
                'name': dataset.name,
                'url': dataset.url,
            }
    except Exception:
        # Stop once the API rejects the page (e.g. past the last page).
        scrape = False
    page += 1

# path_meta is the output directory, defined elsewhere in my script.
with open(f'{path_meta}/dataset_full.pkl', 'wb') as f:
    pickle.dump(dataset_full, f)

Please note that my intentions are purely academic, and I sincerely apologize for any unintended strain my actions may have placed on your website. I can assure you that I am not engaged in any malicious activity, such as DDoS-ing.

Would there be a more appropriate method for me to collect this dataset information for research purposes without causing any issues to your platform? Your guidance and support in this matter would be greatly appreciated.
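
For example, I could throttle my requests with a short pause between pages. Here is a minimal sketch of the same loop with an arbitrary one-second delay (the client calls are the ones used above):

import time

from paperswithcode import PapersWithCodeClient

client = PapersWithCodeClient(token="XXXX")  # redacted API token

datasets = {}
page = 1
while True:
    try:
        result = client.dataset_list(page=page)
    except Exception:
        break  # stop once the API rejects the page (e.g. past the last page)
    for dataset in result.results:
        datasets[dataset.id] = {'name': dataset.name, 'url': dataset.url}
    page += 1
    time.sleep(1)  # pause between page requests to keep the load light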

Thank you for your understanding, and I look forward to hearing from you.

Best regards,
Jimmy

I did write an email clarifying this a few days ago, but there has been no reply yet, so I took a chance and collected the data through this API instead.

Hi @zhimin-z, there is no need to scrape the website; all the data is available at: https://github.com/paperswithcode/paperswithcode-data
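
For reference, the dump files linked from that repository can be fetched directly. Here is a minimal sketch of loading the datasets dump (the file URL below is taken from that repository's README, so treat it as an assumption):

import gzip
import json
import urllib.request

# Dump URL as listed in the paperswithcode-data README (assumed here); the
# repository links one gzipped JSON file per resource (papers, datasets, ...).
DATASETS_DUMP = "https://production-media.paperswithcode.com/about/datasets.json.gz"

with urllib.request.urlopen(DATASETS_DUMP) as response:
    datasets = json.loads(gzip.decompress(response.read()))

print(len(datasets), "datasets in the dump")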

Thank you for sharing the GitHub repository containing the Papers with Code data. I truly appreciate your guidance and support in accessing the dataset information for my research on the evolution of PWC datasets.

While exploring the repository, I noticed that it was last updated three years ago. I am particularly interested in the latest dataset information, as my research primarily focuses on the evolution and current state of PWC datasets. Access to the most up-to-date data is crucial for the accuracy and relevance of my work.

Would you be able to confirm if the dataset information in the GitHub repository is the most recent available, or is there another source I should refer to for the latest data? Your assistance in this matter is invaluable to the success of my research.

Thank you once again for your help, and I look forward to your response.

Best regards,
Jimmy

The repo itself is old because it's just a README. The links point back to our S3 bucket that should be updated every day.
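
One quick way to confirm the daily refresh is to check the Last-Modified header on one of the dump files; here is a sketch, reusing the dump URL assumed above:

import urllib.request

DATASETS_DUMP = "https://production-media.paperswithcode.com/about/datasets.json.gz"

# A HEAD request reads the timestamp without downloading the whole file.
request = urllib.request.Request(DATASETS_DUMP, method="HEAD")
with urllib.request.urlopen(request) as response:
    # If the export runs daily, this timestamp should be at most a day old.
    print(response.headers.get("Last-Modified"))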

Hmm, I found that some datasets are not present in the downloadable JSON files. For example, HELM and HEIM are missing from the Datasets file. That is why I initially thought these files might be obsolete. What are the criteria for generating the Datasets file and the other files, such as the Evaluation tables?
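
For reference, this is roughly how I checked; a sketch that reuses the datasets dump URL assumed earlier, where the "name" field is an assumption about the dump's schema:

import gzip
import json
import urllib.request

DATASETS_DUMP = "https://production-media.paperswithcode.com/about/datasets.json.gz"

with urllib.request.urlopen(DATASETS_DUMP) as response:
    datasets = json.loads(gzip.decompress(response.read()))

# "name" is assumed to be the display name of each dataset entry in the dump.
names = {entry.get("name", "") for entry in datasets}
for wanted in ("HELM", "HEIM"):
    print(wanted, "found" if wanted in names else "missing")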

They should all be there. If they are not, the export might be stuck. @alefnula @andrewkuanop

Also, the community evaluation tables, such as https://paperswithcode.com/sota/text-classification-on-glue and https://paperswithcode.com/sota/abstractive-dialogue-summarization-on-samsum, are not available in the Evaluation tables file either. Is it possible to download the community evaluation tables from any source? If not, is there a suggested time interval at which I could scrape them myself?
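
For completeness, this is how I have been checking for a specific leaderboard through the client; a sketch where evaluation_list comes from the client's API reference, and the dataset attribute on each returned table is my assumption:

import time

from paperswithcode import PapersWithCodeClient

client = PapersWithCodeClient(token="XXXX")  # redacted API token

# Page through the evaluation tables exposed by the API and look for SAMSum.
matches = []
page = 1
while True:
    try:
        result = client.evaluation_list(page=page)
    except Exception:
        break  # stop once the API rejects the page
    for table in result.results:
        # `table.dataset` is assumed to hold the dataset identifier.
        if "samsum" in str(table.dataset or "").lower():
            matches.append(table.id)
    page += 1
    time.sleep(1)  # keep the request rate low

print(matches or "no SAMSum evaluation table returned by the API")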

There also seem to be a lot of evaluation tables from the website missing in the Evaluation tables file.
The Evaluation tables file covers 9,238 datasets in total, while what I collected (from the 100 displayable pages of the PWC datasets) comes to 4,800 datasets.

Overall, on the order of at least ten thousand records appear to be missing from your online archive, and this does not even take into account the evaluations for the datasets beyond the first 100 pages of the PWC website. @rstojnic @alefnula @andrewkuanop

https://paperswithcode.com/sota/abstractive-dialogue-summarization-on-samsum

Thanks for your reply, @andrewkuanop

For evaluation tables, I found that https://paperswithcode.com/sota/text-classification-on-glue is available in the Evaluation tables file, but https://paperswithcode.com/sota/abstractive-dialogue-summarization-on-samsum still is not.

For datasets, I found that both HELM and HEIM are still not in the Datasets file.

I think the issue still persists...

https://paperswithcode.com/sota/abstractive-dialogue-summarization-on-samsum

Thanks, @andrewkuanop

After checking, I found that the dataset issue is resolved: both HELM and HEIM are now in the Datasets file.

However, https://paperswithcode.com/sota/text-classification-on-glue is available in the Evaluation tables file, but https://paperswithcode.com/sota/abstractive-dialogue-summarization-on-samsum still is not.

I think the issue still persists for evaluation tables.