Is a metadata record “almost” expressed in the same language you used for your filter criteria?
Why would one consider indexing validators? Reuse. The value of reuse seems obvious for structural and semantic specification.
Indexing identifiers is key to disambiguating entities. Wikipedia has disambiguation pages.
How do you validate a reified trace of digital-object provenance?
Given a representation of (meta)data that dcterms:conformsTo some data profile, you may wish to translate it to another data profile.
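A minimal sketch of such a translation, assuming both profiles are flat key-value metadata and that a field mapping between them is known. The mapping shown here, from Dublin Core terms to a hypothetical "datacite-like" profile, is illustrative only:

```python
# Illustrative mapping between two data profiles. The target-profile
# field names ("datacite:titles", "datacite:creators") are hypothetical.
FIELD_MAP = {
    "dcterms:title": "datacite:titles",
    "dcterms:creator": "datacite:creators",
}

def translate(record, field_map):
    """Rename mapped fields; drop fields the target profile lacks."""
    return {field_map[k]: v for k, v in record.items() if k in field_map}
```

Real profile translation is rarely this clean: fields may split, merge, or require value conversion, at which point the mapping becomes a set of functions rather than a dict.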
To validate is to compute, so indexing metadata for past validation events and caching any detailed payloads can save time and effort.
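One way to sketch that index, assuming validation is deterministic: key each event by a content hash of the record plus the validator's identifier, so re-validating an unchanged record becomes a lookup.

```python
import hashlib
import json

# Index of past validation events, keyed by (record-hash, validator-id).
_validation_index = {}

def record_key(record, validator_id):
    """Content-address a (record, validator) pair."""
    canonical = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return (digest, validator_id)

def validate_with_cache(record, validator_id, validate_fn):
    """Return the cached validation payload, or compute and index it."""
    key = record_key(record, validator_id)
    if key not in _validation_index:
        _validation_index[key] = validate_fn(record)
    return _validation_index[key]
```

The `validate_fn` parameter stands in for whatever validator you actually run (SHACL engine, JSON Schema checker); the cache only assumes its output is a reusable payload.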
Given a fip:Metadata-schema and a validator for it, such as a sh:Validator or a JSON Schema, how do you determine that the validator is…valid?
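For JSON Schema the standard answer is the meta-schema: a schema is itself validated against the official schema-for-schemas. As a hedged, stdlib-only stand-in for that idea, one can at least check that a purported schema uses only recognized keywords; a real implementation would validate against the published meta-schema with a proper library.

```python
# A deliberately tiny keyword set; the real JSON Schema vocabulary is
# much larger and lives in the official meta-schema.
KNOWN_KEYWORDS = {"type", "properties", "required", "items", "enum"}

def plausible_schema(schema):
    """Shallow sanity check: is this object shaped like a JSON Schema?"""
    if not isinstance(schema, dict):
        return False
    return all(k in KNOWN_KEYWORDS or k.startswith("$") for k in schema)
```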
What conveys that data has been validated or is yet to be validated?
At a base level, an identifier is simple to trace – its trace is the sequence (modulo concurrency) of assertions in which it appears.
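A minimal sketch of that tracing, assuming assertions are (subject, predicate, object) triples recorded in order:

```python
def trace(identifier, assertions):
    """Return the assertions mentioning the identifier, in log order."""
    return [a for a in assertions if identifier in (a[0], a[2])]

# Example log; identifiers and predicates are illustrative.
log = [
    ("doi:10.1234/abc", "dcterms:creator", "orcid:0000-0001"),
    ("handle:xyz", "dcterms:title", "Other record"),
    ("doi:10.1234/abc", "dcterms:conformsTo", "profile:v1"),
]
# trace("doi:10.1234/abc", log) yields the first and third assertions.
```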
Good identifiers are opaque, so translation is by association – owl:sameAs, skos:exactMatch, or some other relationship.
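Translation-by-association can be sketched as a transitive closure over equivalence links. The links here are bare pairs; which predicate (owl:sameAs, skos:exactMatch, or another) asserted each pair is abstracted away in this sketch.

```python
from collections import defaultdict

def equivalents(start, links):
    """All identifiers reachable from `start` via equivalence links."""
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for other in graph[node]:
            if other not in seen:
                seen.add(other)
                stack.append(other)
    return seen
```

Note that owl:sameAs and skos:exactMatch have different strengths; collapsing them into one closure, as here, is a simplification you may not want in production.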
Where do you look for identifiers? If you’re looking for a URI, the IANA has a registry of schemes, like https, mailto, and tel.
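A minimal scheme check can be sketched with the standard library. The allow-list below is a tiny sample of registered schemes; in practice you would consult the full IANA URI-scheme registry.

```python
from urllib.parse import urlsplit

# A small sample only; the authoritative list is the IANA registry.
KNOWN_SCHEMES = {"https", "http", "mailto", "tel", "urn"}

def has_registered_scheme(uri):
    """True if the URI's scheme is in our (sampled) allow-list."""
    return urlsplit(uri).scheme.lower() in KNOWN_SCHEMES
```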
How do you validate that an identifier service provides global uniqueness of minted keys, persistence of bindings, and resolution of keys to descriptive metadata?
Day 1 of my five-week experiment to elaborate on FAIR-enabling services, and I have already fallen flat on my face.
Yesterday, I proposed that a strategy for implementing the FAIR principles for research data management can focus on ensuring five FAIR-enabling services. These services in turn prompt tactical choices of FAIR-enabling resources that may satisfactorily address each question and thereby produce a comprehensive implementation profile.