Distinguish scraped from manually verified/corrected entries #77

Open
asiekierka opened this issue Jul 11, 2021 · 3 comments

asiekierka (Contributor) commented Jul 11, 2021

For an active platform that is scraped multiple times, it's a good idea to know which entries have been manually adjusted or corrected, so that their metadata is not overridden with a flawed or incomplete version.

An ideal solution, IMO, would be for the raw, scraped data to be stored separately from manually corrected/user-provided data. From there, the two could be merged every time an update is performed.

An alternate solution would be for the metadata to contain the source of the information, allowing such unwanted overrides to be skipped.
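
To make the first idea concrete, here is a minimal sketch (the "source" field, the merge rule, and all values are assumptions for illustration, not part of any existing schema): scraped data and manual corrections are kept in separate documents and merged on each update, with manual values taking precedence.

```python
# Illustrative sketch only: "source" and the precedence rule are assumed,
# not taken from the current schema.
def merge_entry(scraped: dict, manual: dict) -> dict:
    """Publish the latest scrape, but let manual corrections win."""
    merged = dict(scraped)   # start from the freshly scraped metadata
    merged.update(manual)    # manually corrected fields always override
    return merged

scraped = {"title": "Example Game", "developer": "unknown", "source": "scraper:demozoo"}
manual = {"developer": "Example Dev", "source": "manual"}

print(merge_entry(scraped, manual))
# {'title': 'Example Game', 'developer': 'Example Dev', 'source': 'manual'}
```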

avivace (Member) commented Sep 20, 2021

> For an active platform that is scraped multiple times, it's a good idea to know which entries have been manually adjusted or corrected, so that their metadata is not overridden with a flawed or incomplete version.
>
> An ideal solution, IMO, would be for the raw, scraped data to be stored separately from manually corrected/user-provided data. From there, the two could be merged every time an update is performed.
>
> An alternate solution would be for the metadata to contain the source of the information, allowing such unwanted overrides to be skipped.

This is quite an interesting issue. However, I honestly fail to see a feasible way to implement this without disrupting the current pipelines in a major way.

For example, we could add an "audit report" [1] property to the JSON schema that records every action taken on an entry, including which scraper generated it and how, so that the generation process is reproducible. On top of this "initial" step, further records could describe user interventions on those JSONs.

At this point I don't see how we can keep the JSONs human-editable as they are now, though.

[1] Some of those concepts are described in the OAIS model specification - https://public.ccsds.org/pubs/650x0m2.pdf
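
Purely as an illustration of what such a provenance record could look like (the "audit" property name, its fields, and all values below are hypothetical, not an agreed schema):

```python
# Hypothetical "audit" property: each event records what touched the entry,
# when, and how, so the current state of the JSON can be reproduced.
entry = {
    "title": "Example Game",
    "developer": "Example Dev",
    "audit": [
        {"action": "scraped", "agent": "scraper:demozoo", "date": "2021-07-11"},
        {"action": "corrected", "agent": "user:asiekierka", "date": "2021-09-20",
         "fields": ["developer"]},
    ],
}
```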

avivace added the schema label Jan 27, 2022
dag7dev (Contributor) commented Mar 19, 2024

Could it be an interesting addition to add a "manual" or "verified" tag, so that manually reviewed entries can be marked as such?
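
As a rough sketch of this lighter-weight option (the "verified" flag name is hypothetical), a scraper would simply refuse to touch flagged entries:

```python
# Hypothetical per-entry flag: scrapers skip anything marked as verified.
def scraper_may_update(entry: dict) -> bool:
    return not entry.get("verified", False)

print(scraper_may_update({"title": "Example Game", "verified": True}))  # False
print(scraper_may_update({"title": "Another Game"}))                    # True
```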

asiekierka (Contributor, Author) commented

From Discord:

> I believe the appropriate solution is to have separate folders for scraping result JSONs, and then have a list of matches in the entries/ JSONs.
> So, for example, sources/demozoo/reflectendo/reflectendo.json contains the scraped data, and entries/reflectendo/reflectendo.json has "from": [ "demozoo/reflectendo" ].
> The backend then ingests all JSONs from entries/ first, then all JSONs from sources/* that do not have a matching "from" entry.
> This way, you both make it easy to figure out which entries are manually sourced (they're in entries/) and preserve differing metadata (say, if pdroms describes a game differently, has unique screenshots, etc.).
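
A minimal sketch of that ingestion order, assuming the sources/ and entries/ layout described above (the paths, the "from" key, and the function itself are illustrative, not the actual backend code):

```python
import json
from pathlib import Path

def ingest(root: Path) -> dict[str, dict]:
    """Load entries/ first, then any sources/* JSON not claimed via "from"."""
    entries: dict[str, dict] = {}
    claimed: set[str] = set()

    # 1. Manually curated entries take priority.
    for path in (root / "entries").glob("*/*.json"):
        data = json.loads(path.read_text())
        entries[path.stem] = data
        claimed.update(data.get("from", []))  # e.g. ["demozoo/reflectendo"]

    # 2. Scraped entries are only ingested if no curated entry claims them.
    for path in (root / "sources").glob("*/*/*.json"):
        source_id = f"{path.parts[-3]}/{path.parts[-2]}"  # e.g. "demozoo/reflectendo"
        if source_id not in claimed:
            entries.setdefault(path.stem, json.loads(path.read_text()))

    return entries
```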
