Re: #correspondencetables : from raw to triplets

Thanks, Miguel -

It seems clear to me that the first step should be defining the verbs we will use, and the reasons for choosing those particular verbs. For example, OWL and SKOS seem to offer similar functionality, but I am sure that some people have strong opinions on which one is preferable. We also need to set up the metadata (i.e. RDF URIs) for the level of confidence we have in the matches: official, manually created and peer-reviewed, computer-generated, etc.
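
To make that concrete, here is a rough sketch (Python + rdflib) of what the SKOS mapping verbs plus a confidence annotation could look like. All the URIs and the `bont:` vocabulary below are placeholders I invented for illustration, not a proposal:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, SKOS

ISIC3 = Namespace("http://example.org/classification/isic/v3/")  # placeholder
ISIC4 = Namespace("http://example.org/classification/isic/v4/")  # placeholder
BONT = Namespace("http://example.org/ontology/")                 # placeholder

g = Graph()
g.bind("skos", SKOS)
g.bind("bont", BONT)

# SKOS mapping verbs let us say how strong a match is:
#   skos:exactMatch - codes are interchangeable
#   skos:closeMatch - equivalent for many, but not all, applications
#   skos:broadMatch / skos:narrowMatch - one code covers more/less
g.add((ISIC3["0111"], SKOS.closeMatch, ISIC4["0111"]))

# Confidence metadata hung on the correspondence itself, modelled as its
# own resource (one possible pattern; the bont: terms are made up).
corr = URIRef("http://example.org/correspondence/isic3-isic4/0111")
g.add((corr, RDF.type, BONT.Correspondence))
g.add((corr, BONT.sourceConcept, ISIC3["0111"]))
g.add((corr, BONT.targetConcept, ISIC4["0111"]))
g.add((corr, BONT.confidence, Literal("official")))  # or "manual", "generated"

print(g.serialize(format="turtle"))
```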

After looking through the repo and the code Miguel posted, I think we should investigate going directly from the raw data to RDF. The intermediate step doesn't really gain us anything, and it seems a bit silly not to use the power of our RDF database when constructing these correspondences. For example, if ISIC v4 disaggregated the production of some commodities relative to v3, then we should be storing the region-specific production of those commodities in our database, and using those numbers to do region-specific matches. We can always construct correspondence tables from the database relatively easily afterwards; a sketch of the direct route is below.
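
I don't know the exact layout of the files in `raw`, but assuming a two-column CSV of source and target codes, the direct raw-to-RDF route could look roughly like this (file name, column names, and URIs are all hypothetical):

```python
import csv

from rdflib import Graph, Namespace
from rdflib.namespace import SKOS

ISIC3 = Namespace("http://example.org/classification/isic/v3/")  # placeholder
ISIC4 = Namespace("http://example.org/classification/isic/v4/")  # placeholder

g = Graph()
g.bind("skos", SKOS)

# Hypothetical file name and column names; adjust to the real raw layout.
with open("raw/isic3_to_isic4.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Without region-specific production data we can only assert a
        # generic close match; the database numbers would let us refine
        # this into region-specific (and weighted) matches later.
        g.add((ISIC3[row["isic3"]], SKOS.closeMatch, ISIC4[row["isic4"]]))

g.serialize("isic3_isic4.ttl", format="turtle")
```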

I also think we need a better vocabulary than "sameAs" when storing the label, code, and other codes (because why not) from the various classification systems. Maybe we can adapt existing terms to add more specificity.
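
For labels and codes specifically, SKOS already has more precise terms than sameAs: skos:prefLabel (and skos:altLabel) for labels, and skos:notation for the code itself. Something like this (placeholder URIs again):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

ISIC4 = Namespace("http://example.org/classification/isic/v4/")  # placeholder

g = Graph()
g.bind("skos", SKOS)

activity = ISIC4["0111"]
g.add((activity, RDF.type, SKOS.Concept))
g.add((activity, SKOS.notation, Literal("0111")))  # the code itself
g.add((activity, SKOS.prefLabel, Literal(
    "Growing of cereals (except rice), leguminous crops and oil seeds",
    lang="en",
)))
```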

Given this need for some fundamental research, one possible priority for the group would be to get as many metadata systems as possible into arborist in their native form (e.g. ISIC 3, ISIC 4, HS1, NACE, NAICS, CPC). The README should also be updated to reflect the data available and the current state of the repo, especially with respect to the correspondence tables already available in native form (in `raw`).
