Topics

#correspondencetables : from raw to triplets

Miguel Fernández Astudillo
 

Interesting, I will have a deeper look when possible.

I was updating the group readme. Should I move the references to the Hackathon somewhere else? It seems that this repo will survive and will have a function in the workflow.

Miguel

Chris Mutel
 

As we are not the only people thinking about these topics, there has already been a lot of work in this area. It is relatively easy to find some half-baked implementations in RDF (e.g. datahub.io, joinedupdata.org), and the unstats web page Miguel linked is great. However, the best resource I have found is http://semstats.org/2016/challenge/classifications, with the actual data available at http://semstats.org/2016/challenge/challenge-data. The repo to generate these correspondences is https://github.com/FranckCo/Stamina, with documentation at https://github.com/FranckCo/Stamina/blob/master/doc/content.md.

This data was produced by a project whose website is currently down (stamina-project.org); the easiest alternative would be to work with the original creator, but it doesn't look like he is responding to issues (I am also writing him directly). There are a few other things to clean up in this data; see e.g. https://github.com/FranckCo/Stamina/issues/11 (and others).

Not sure about the next steps, except that I don't think we can create a better wheel than the professionals already have. Maybe we can polish their wheel a bit and use it?

--
############################
Chris Mutel
Technology Assessment Group, LEA
Paul Scherrer Institut
OHSA D22
5232 Villigen PSI
Switzerland
http://chris.mutel.org
Telefon: +41 56 310 5787
############################

Miguel Fernández Astudillo
 

Hello hello

Let's see if I am getting this right.

Chris, when you say "put metadata systems in their native form into arborist (e.g. ISIC 3, ISIC 4, HS1, NACE, NAICS, CPC)", does that mean "as downloaded"? Are we talking about the "list of possible names" (e.g. the files under "codes and descriptions" at https://unstats.un.org/unsd/classifications/business-trade/correspondence.asp#correspondence-head, such as "ISIC_Rev_4_english_structure.txt")? If so, I would put in only the needed ones; I don't think we need "HS1988".

Would that be to create the URIs? e.g. <http://rdf.bonsai.uno/activitytype/isic_v4section/>:Manufacturing a bont:ActivityType
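
As a purely illustrative sketch on my part (the isic_v4section URI scheme is just my guess, not something we agreed on), creating one such URI could look like this with rdflib:

# Illustrative only: minting a URI for an ISIC v4 section and typing it
# as a bont:ActivityType. The URI scheme is a guess, not an agreed one.
import rdflib
from rdflib import RDF, Namespace

BONT = Namespace("http://ontology.bonsai.uno/core#")
ISIC4 = Namespace("http://rdf.bonsai.uno/activitytype/isic_v4section/")

g = rdflib.Graph()
g.add((ISIC4.Manufacturing, RDF.type, BONT.ActivityType))
print(g.serialize(format="turtle"))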

and later move to the "official" (= ready to use?) correspondence tables, specifying predicates?

To make use of the existing correspondence tables, I think we would need an "exiobase2 to exiobase3" table; otherwise they are completely disconnected from the (core?) of the database.

best, Miguel

PS: I think a getting-started guide is urgently needed; I am getting lost already!

Matteo Lissandrini (AAU)
 

In this case your example seems fine; you can probably still say that fbcl is a subclass of POWC.
You can also say sameAs between POWN and Nuclear, assuming that the only way of producing electricity covered by "Nuclear" is nuclear fission (in contrast to fusion?).

rdf:type doesn't apply when matching different activity types.
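
For concreteness, a minimal sketch of this alternative (namespaces follow Chris's example; for classes, owl:equivalentClass may be safer than owl:sameAs, which is strictly for individuals):

# A minimal sketch of the alternative described above: rdfs:subClassOf for
# the containment case and sameAs for the exact case. Namespaces follow
# Chris's example from later in this thread.
import rdflib
from rdflib import RDFS, OWL, Namespace

EX3 = Namespace("http://rdf.bonsai.uno/activitytype/exiobase3_3_17/")
ENTSOE = Namespace("http://rdf.bonsai.uno/activitytype/entsoe/")

g = rdflib.Graph()
# fbcl (lignite) is one kind of coal-fired electricity production.
g.add((ENTSOE.fbcl, RDFS.subClassOf, EX3.A_POWC))
# nuke and A_POWN denote the same activity, if "Nuclear" means fission only.
g.add((ENTSOE.nuke, OWL.sameAs, EX3.A_POWN))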

Chris Mutel
 

Thanks Matteo-

It is a bit tricky keeping the class definitions and instances in line with the idea of `rdf:type` referring to multiple classes - could you provide an alternative implementation of the example?

BTW, "Nuclear" is the label ENTSO-E uses in its API, short for "production of electricity using nuclear fission".


Matteo Lissandrini (AAU)
 

Hi Chris,
Have you checked the very useful examples here: https://www.w3.org/2006/07/SWD/SKOS/skos-and-owl/master.html

In general, let's use subclass-of and rdf:type when we know something is a subset or an instance, and let's use SKOS for "fuzzy" concepts.

ActivityTypes are classes, so you can say that something is a subclass of a specific activity type.

I'm not sure what should be just "Nuclear" in your model.

About automatic tools: they usually introduce uncertainty, but above all, they require an initial ground truth; otherwise we cannot tell whether they are doing what we want them to do.

We do not yet have a first full version of the BONSAI data and system; trying to address automatic data cleaning and the like now is more likely to introduce noise and slow down the project.
So I would say: let's get an MVP (minimum viable product) done, with some manual work that assures the highest quality and control (we can limit ourselves to just a portion of the tables).
Later on I will be happy to help you investigate more automatic tools, but I would do that only once we can compare against something we know to be right.


Cheers,
Matteo

---
Matteo Lissandrini

Department of Computer Science
Aalborg University

http://people.cs.aau.dk/~matteo

Chris Mutel
 

@Matteo, Bo, Miguel; please comment and correct!

Defining correspondence tables in RDF

Based on my reading of https://www.w3.org/TR/skos-reference/, I created the following:

@prefix bont: <http://ontology.bonsai.uno/core#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

<http://rdf.bonsai.uno/activitytype/exiobase3_3_17/A_POWC> a bont:ActivityType ;
    skos:prefLabel "Production of electricity by coal" ;
    skos:altLabel "A_POWC" ;
    skos:narrowMatch <http://rdf.bonsai.uno/activitytype/entsoe/fbcl> .

<http://rdf.bonsai.uno/activitytype/entsoe/fbcl> a bont:ActivityType ;
    skos:prefLabel "Fossil Brown coal/Lignite" ;
    skos:broadMatch <http://rdf.bonsai.uno/activitytype/exiobase3_3_17/A_POWC> .

<http://rdf.bonsai.uno/activitytype/exiobase3_3_17/A_POWN> a bont:ActivityType ;
    skos:prefLabel "Production of electricity by nuclear" ;
    skos:altLabel "A_POWN" ;
    skos:exactMatch <http://rdf.bonsai.uno/activitytype/entsoe/nuke> .

<http://rdf.bonsai.uno/activitytype/entsoe/nuke> a bont:ActivityType ;
    skos:prefLabel "Nuclear" .

Writing this out has been very helpful for me, as it helped build a mental model of how to express hierarchical relations, codes, etc. I have surely made mistakes, though!

Outstanding questions:

1. It is unclear to me whether or not `narrowMatch` and `broadMatch` are transitive.
2. Do we need to declare `narrowMatch` and `broadMatch`?
3. Can we drop `rdfs:label` completely in favor of `skos:prefLabel`?
4. Do we agree on using `skos:altLabel` for codes?
5. Partial overlaps, as mentioned by Bo. There are possibilities to describe this in SKOS, but I don't know what approach is best.

Next steps for the correspondence tables repo

I still think that the first step should be getting all the basic data (labels, codes, and URIs) into arborist, followed by the official correspondence lists using the above format. The example that Miguel posted should never be needed (A -> C, when we know A -> B and B -> C), as we should be able to derive this transitive relationship "automatically" through SPARQL queries (and we need to learn how to write these queries in any case).
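
To make that concrete, here is an untested rdflib sketch with toy URIs; the "+" property path computes the closure explicitly, whether or not the SKOS mapping properties are formally transitive:

# Untested sketch with toy URIs: deriving the A -> C correspondence from
# A -> B and B -> C with a SPARQL 1.1 property path ("+" = one or more hops).
import rdflib

g = rdflib.Graph()
g.parse(data="""
    @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
    <urn:ex:A> skos:exactMatch <urn:ex:B> .
    <urn:ex:B> skos:exactMatch <urn:ex:C> .
""", format="turtle")

q = """
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    SELECT ?start ?end WHERE { ?start skos:exactMatch+ ?end }
"""
for row in g.query(q):
    print(row.start, "->", row.end)  # includes the derived A -> C pair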

We can then proceed with our own self-generated correspondences; there are a number of libraries to help with this besides fuzzywuzzy (though it does have the best name :):

https://recordlinkage.readthedocs.io/en/latest/about.html
https://github.com/dedupeio/dedupe
https://github.com/kvh/match
https://pypi.org/project/py_entitymatching/


Some research and trial phases would be necessary before picking any particular approach.


Bo Weidema
 

I agree with Chris here.

First, most classifications themselves contain hierarchies, typically indicated by some code convention, such as ISIC 4 "011" being a subclass of "01", and so on. These should be related by the appropriate RDF predicate.

Secondly, within each classification, each class typically has one or more human-readable labels and one or more codes. These should be related by the relevant "main code", "alternative code", "main label", and "alternative label" RDF predicates.

When matching two classifications, we have one of three relations (exact match, fully contained in, or partly contained in) that needs to be expressed. Example:

- Original I "1" vs Original II "A": exact match. Evolving BONSAI classification: the preferred name of either 1 or A, if different.
- Original I "2" vs Original II "B": B fully contained in 2 (= B is a subclass of 2). Evolving BONSAI classification: B, AND implicitly "2lessB" also exists, also being a subclass of 2.
- Original I "3" vs Original II "C": C partly contained in 3 (= CpartOf3 is a subclass of 3). Evolving BONSAI classification: CpartOf3, AND implicitly "3lessCpartOf3" also exists, also being a subclass of 3.
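
As a rough sketch of the second case (the base URI and class names here are only illustrative, not agreed):

# Rough sketch of case 2 above: B is fully contained in 2, so B is a
# subclass of 2, and a synthetic remainder class "2lessB" covers what is
# in 2 but not in B. The base URI is illustrative only.
import rdflib
from rdflib import RDFS, Namespace

CLS = Namespace("http://rdf.bonsai.uno/classification/")  # hypothetical
g = rdflib.Graph()
g.add((CLS["B"], RDFS.subClassOf, CLS["2"]))
g.add((CLS["2lessB"], RDFS.subClassOf, CLS["2"]))
# B and 2lessB together exhaust class 2, so every item in the domain ends
# up in exactly one leaf class.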

In case both original classifications are (expected to be) exhaustive over the same domain, we can deduce the relation (exact match, fully contained in, or partly contained in) from the existence (or not) of further instances of classes 2, B, 3 and C. We can also deduce that "Original II" will contain one or more classes corresponding to each of 2lessB, 3lessCpartOf3, and ClessCpartOf3, so that all items within the domain belong to one specific class in every classification.

The resulting structure should be a triple for each of the relations between the classifications.

What cannot be done automatically is: 1) the choice of the preferred name of either 1 or A, if different; and 2) improving the human readability of newly auto-generated labels.

 Best regards

Bo

Chris Mutel
 

Thanks Miguel-

It seems clear to me that the first step should be defining the verbs we will use, and the reasons we are using those particular verbs. For example, both OWL and SKOS seem to offer similar functionality, but I am sure that some people have strong opinions on which one is preferable. We also need to set up the metadata (i.e. RDF URIs) for the level of confidence we have in the matchings: official, manual and peer reviewed, computer generated, etc.
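
One possible shape for this (a sketch only; the bmeta namespace and the matchConfidence predicate are placeholders, not agreed URIs) is standard RDF reification:

# Sketch only: attaching a confidence level to a single match statement via
# standard RDF reification. The BMETA namespace and matchConfidence predicate
# are placeholders, not agreed vocabulary.
import rdflib
from rdflib import RDF, BNode, Literal, Namespace

SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")
BMETA = Namespace("http://rdf.bonsai.uno/meta/")  # placeholder
EX3 = Namespace("http://rdf.bonsai.uno/activitytype/exiobase3_3_17/")
ENTSOE = Namespace("http://rdf.bonsai.uno/activitytype/entsoe/")

g = rdflib.Graph()
g.add((EX3.A_POWN, SKOS.exactMatch, ENTSOE.nuke))

stmt = BNode()  # a reified copy of the match that we can annotate
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX3.A_POWN))
g.add((stmt, RDF.predicate, SKOS.exactMatch))
g.add((stmt, RDF.object, ENTSOE.nuke))
g.add((stmt, BMETA.matchConfidence, Literal("official")))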

After looking through the repo and the code Miguel posted, I think we should investigate going directly from the raw data to RDF. The intermediate step doesn't really gain us anything, and it seems a bit silly not to use the power of our RDF database when constructing these correspondences. For example, if ISIC v4 disaggregated the production of some commodities relative to v3, then we should be storing the region-specific production of these commodities in our database, and using these numbers to do region-specific matches. We can always construct correspondence tables from the database relatively easily afterwards.

I also think we need better vocabulary than "sameAs" when storing the label, code, and other codes (because why not) from certain classification systems. Maybe we can adapt existing terms to add more specificity.

Given this need for some fundamental research, one possible priority for the group would be to get as many metadata systems as possible in their native form into arborist (e.g. ISIC 3, ISIC 4, HS1, NACE, NAICS, CPC). The README should also be updated to reflect the data available and the current state of the repo, especially with respect to the existing correspondence tables already available in native form (in `raw`).

romain
 

In the future, should manual string matching become too time-consuming, you might consider getting some help from fuzzy string comparison (https://github.com/seatgeek/fuzzywuzzy).
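
For example (a tiny sketch, with labels taken from Chris's example above):

# Tiny sketch: score candidate labels so a human only has to review the
# top suggestions. Labels are taken from Chris's example in this thread.
from fuzzywuzzy import process

exiobase_labels = [
    "Production of electricity by coal",
    "Production of electricity by nuclear",
]
match, score = process.extractOne("Fossil Brown coal/Lignite", exiobase_labels)
print(match, score)  # best candidate and its 0-100 similarity score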

/Romain

Miguel Fernández Astudillo
 

Dear all

 

During the Friday meeting we discussed the issue of how to get from "raw data" to the data stored in the database. There was some disagreement, and I think I did not explain myself clearly, so I will try to do so in this email.

 

As I see it, raw data will come in a variety of formats: some will be csv, others xlsx or txt. Some will be existing correspondence tables that we use as intermediate steps to build different ones, or we may create other correspondence tables by "joining" two different classifications. The raw data should be stored whenever possible, to avoid breaking the system when data is no longer available or is slightly modified.

 

As I see it, it all starts with the dirty job of data cleaning. Data cleaning should be scripted so it can be reproduced easily, avoiding any manual steps. But it can hardly be generalised, and it will be very specific to the tables being created. It will also need to be adapted over time, because data providers will change the way they output their data. This process of data cleaning should produce a csv* that can be more easily "digested" by other functions, e.g. to add a predicate or a weighting factor. This fits the recommendations on reproducibility ("record intermediate results in standardized formats", https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003285). From the cleaned data we can create a table with subject-predicate-object and maybe some weighting (all with a descriptor of the metadata). This curated info should be (in my opinion) what is "consumed" by arborist (see issue #4).
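
For instance, such a cleaned table could look like this (column names are purely illustrative on my part):

# Purely illustrative: one row per correspondence, with predicate and
# provenance columns, written out as the standardized intermediate csv.
import pandas as pd

rows = [
    {"subject": "exiobase3_3_17/A_POWC", "predicate": "skos:narrowMatch",
     "object": "entsoe/fbcl", "weight": 1.0, "source": "manual"},
    {"subject": "exiobase3_3_17/A_POWN", "predicate": "skos:exactMatch",
     "object": "entsoe/nuke", "weight": 1.0, "source": "manual"},
]
pd.DataFrame(rows).to_csv("exiobase3_to_entsoe.csv", index=False)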

 

Here is an example of (trying) to create two different correspondence tables, just to illustrate how different one can be from the other:

 

https://github.com/BONSAMURAIS/Correspondence-tables/blob/master/scripts/from_raw_to_clean_tables.ipynb

 

Enjoy the weekend!

 

Miguel

 

*Regarding the three different ways of naming the same activities/flows in Exiobase: I think these should be 3 tables with a "same as" (?) predicate.