Re: Adding provenance #ontology #intro #provenance

Bo Weidema
 

Dear Agneta,

First, it is important to distinguish between:

1) What is "raw data" in a BONSAI context, namely the data as they are received from elsewhere. These data may be either direct measurements (very rarely) or previously more or less processed (in the case of Exiobase, definitely more so), with or without explicit prior provenance. For these data, it is obviously sufficient to report the direct source as received (example: Exiobase version NNh, downloaded from URL at Time), which is then applicable to all datapoints within that dataset.

2) Data that are corrected or otherwise manipulated after receipt, in which case it is relevant to add the nature of the correction or calculation, and a timestamp for the changed dataset (but not for the unchanged parts). In this way, one can always trace the origin of any datum back to the form in which it was originally provided to BONSAI.
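For illustration, the two cases could be recorded roughly as follows. This is a minimal sketch; the class and field names are hypothetical, not an existing BONSAI implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Provenance:
    source: str                        # e.g. "Exiobase version NNh"
    retrieved_from: str                # URL the raw data was received from
    retrieved_at: datetime             # timestamp of receipt
    derived_from: Optional["Provenance"] = None  # set for corrected data
    note: str = ""                     # nature of the correction/calculation

def correct(prov: Provenance, note: str) -> Provenance:
    """Record a manipulation: new timestamp, link back to the received form."""
    return Provenance(prov.source, prov.retrieved_from,
                      datetime.now(timezone.utc), derived_from=prov, note=note)

# Raw data keep only the direct source; corrections chain back to it.
raw = Provenance("Exiobase vNN", "https://example.org/exiobase",
                 datetime(2019, 11, 1, tzinfo=timezone.utc))
fixed = correct(raw, "rebalanced one sector")
assert fixed.derived_from is raw       # origin is always traceable
```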

As ambitions and resources increase, someone may later want to add further upstream provenance to the data in BONSAI, which is of course always possible and desirable.

Best regards

Bo

On 2019-11-13 at 14:42, Agneta wrote:

Thanks for the document Bo

The document recommends timestamping of the datapoints and query outputs, although I am unsure to what degree we will be able to add provenance to each value in Exiobase. While Exiobase does use data from multiple sources, it applies algorithms to produce a balanced dataset. In other words, it is a secondary dataset (primary datasets are those which contain raw data).

If some values are changed in the future, this leads to the publication of a new version of the dataset. So the provenance for all values in Exiobase is given as Exiobase + (specific version).
My question is: do we need provenance for individual values in a secondary dataset? It is different when we have minute-by-minute information on temperature change in a region (raw data); there, timestamping of individual values might be more relevant.

What do you think?
Agneta


Re: Adding provenance #ontology #intro #provenance

Bo Weidema
 

Dear Emil and Agneta,

A warm welcome to Emil.

Re. provenance of the individual numbers and calculations, there is a good description in the section "Versioning and citation" in this document, relating to the recommendations from RDA. This is an elegant and efficient way of handling this issue, I think. I thought I had added that to the wiki, but right now I cannot find it (?).

Best regards

Bo

On 2019-11-13 at 12:42, Agneta wrote:

Dear all

I would like to introduce Emil Riis Hansen to the Bonsai community. He has been recently employed as a research assistant with the computer science department at Aalborg University. 

Emil is interested in working with adding provenance to our current BONSAI ontology. 

Provenance helps us add information on the origin of data, i.e. where the data comes from, who generated the data, the licence of the data, etc. We had discussed this issue during the hackathon but hadn't developed it since.
Currently Emil has proposed a high-level provenance which is limited to determining the origin of the dataset, not of individual values in it. For example, if anyone queries data from BONSAI, they will get the information that the data is sourced from Exiobase; and if other datasets are integrated into the semantic web using the BONSAI ontology, they will find information on the origin of those datasets. Provenance of individual values in a dataset is harder to determine, as they may be calculated, estimated, or raw data from the data provider.
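The dataset-level provenance described here could, for illustration, be emitted as PROV-O-style triples attached to the dataset node only. The URIs below are invented placeholders, not the real BONSAI URI scheme:

```python
# Sketch: dataset-level provenance as RDF-style triples using PROV-O
# terms (prov:wasDerivedFrom, prov:generatedAtTime); URIs are hypothetical.
PROV = "http://www.w3.org/ns/prov#"

def dataset_provenance(dataset_uri, source_uri, downloaded_at):
    """Attach provenance to the dataset node only, not to each value."""
    return [
        (dataset_uri, PROV + "wasDerivedFrom", source_uri),
        (dataset_uri, PROV + "generatedAtTime", downloaded_at),
    ]

triples = dataset_provenance(
    "http://example.org/dataset/exiobase_v3",    # hypothetical
    "http://example.org/source/exiobase",        # hypothetical
    "2019-11-13T12:42:00Z",
)
```

A query for any value in the dataset would then resolve to these two statements rather than to per-value records.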

Emil is currently also preparing a conference paper on how he plans to add provenance to the current ontology. For the purposes of this paper, it would be useful to upload the provenance information to the current RDF data we have in the Jena database. This will help the reviewers query the information as presented in the paper.

If anyone here has been working with provenance or is interested, please feel free to write to me.
Kind regards

Agneta

--

Re: Adding provenance #ontology #intro #provenance

Bo Weidema
 

Dear Emil and Agneta,

A warm welcome to Emil. Re. provenance of the individual numbers, there is a good description on the wiki, relating to the recommendations from RDA. This is an elegant and efficient way of handling this issue, I think.

Best regards

Bo


--

Adding provenance #ontology #intro #provenance

Agneta
 

Dear all

I would like to introduce Emil Riis Hansen to the Bonsai community. He has been recently employed as a research assistant with the computer science department at Aalborg University. 

Emil is interested in working with adding provenance to our current BONSAI ontology. 

Provenance helps us add information on the origin of data, i.e. where the data comes from, who generated the data, the licence of the data, etc. We had discussed this issue during the hackathon but hadn't developed it since.
Currently Emil has proposed a high-level provenance which is limited to determining the origin of the dataset, not of individual values in it. For example, if anyone queries data from BONSAI, they will get the information that the data is sourced from Exiobase; and if other datasets are integrated into the semantic web using the BONSAI ontology, they will find information on the origin of those datasets. Provenance of individual values in a dataset is harder to determine, as they may be calculated, estimated, or raw data from the data provider.

Emil is currently also preparing a conference paper on how he plans to add provenance to the current ontology. For the purposes of this paper, it would be useful to upload the provenance information to the current RDF data we have in the Jena database. This will help the reviewers query the information as presented in the paper.

If anyone here has been working with provenance or is interested, please feel free to write to me.
Kind regards

Agneta

Re: Presentation by Wes Ingwersen, US EPA - Friday, Nov. 22 - 15:00 CET/9:00 EST

Bo Weidema
 

Bo Weidema is inviting you to a scheduled Zoom meeting.

Time: Nov 22, 2019 03:00 PM Copenhagen

Join Zoom Meeting
https://zoom.us/j/799298250

Meeting ID: 799 298 250

One tap mobile
+16468769923,,799298250# US (New York)
+14086380968,,799298250# US (San Jose)

Dial by your location
        +1 646 876 9923 US (New York)
        +1 408 638 0968 US (San Jose)
        +1 669 900 6833 US (San Jose)
Meeting ID: 799 298 250
Find your local number: https://zoom.us/u/acOELCujEO

On 2019-11-11 at 23:26, Chris Mutel wrote:

Dear all-

I would like to invite Wes Ingwersen to present the work that the US
EPA is doing on open data infrastructure and associated efforts to the
BONSAI group. We have had a "hang out" on Friday afternoons before,
and that time works well for him, so those who are interested and
available are welcome to join.

Bo, can you send out a Zoom link?

-Chris

--


Intro Mail

matthias.dollfuss@...
 

Hello all,

 

My name is Matthias Dollfuss (27). I'm from Austria, studying Eco-Design (Ecological Product Development) at the University of Applied Sciences in Wieselburg.

Before studying Eco-Design I did web development in Python (Plone). I'm also in a start-up where we want to make the fashion industry more sustainable.

 

I came across BONSAI because I was searching for a new open-source tool for doing LCAs and also displaying them on e-commerce stores, so that there would be more transparency about the environmental impacts of the different products.

I'm searching for a master's thesis project which combines LCA and web development, so I joined your mailing list to find out whether BONSAI would be an option for me.

 

I was wondering about the current state of the project; maybe you could give me more information about that. 😊

 

 

All the best

 

Matthias Dollfuss


Re: #rdf #issues #exiobase

Stefano Merciai
 

Hi,

please see my answer below.


On 08/11/2019 13:16, Bo Weidema wrote:
On 2019-11-08 at 10:59, Chris Mutel wrote:

A trade activity (does trade need to be flow-object specific? In the future, does it need to be specific to transport mode?)
International trade activities, i.e. activities that move use of an export to become an import supply, do need to be FlowObject specific. Each such trade activity can have inputs of different transport modes, insurance, and other trade facilitation activities, as well as outputs of product losses. However, in Exiobase, these international trade activities are (as far as I understand) integrated in the national data, so to avoid double counting can be ignored. Of course, at some point in time, someone will want to separate these out from the national tables, but that is not required for now.
SM: Yes, transports, insurance, etc. linked to imports are just accounted for separately as normal inputs, but not necessarily in the national data. They can also be imported if the logistics company is foreign.
One minor technical question - do we take trade data from the supply or use table? From my uninformed perspective, I would expect this data to be the same in both tables, but I don't underestimate the ability of data providers to surprise me anymore!

The trade data appear as imports in the Exiobase use table. Note that this is different from the way it appears in national supply-use tables.

The above is according to the best of my knowledge. I assume Stefano can correct me if I should be wrong.
SM: You are right. In Exiobase, given a country, if you sum up the rest-of-countries rows in the use table, you get the imports. Instead, if you sum up the rest-of-countries columns you get the exports. As for supply-use tables provided by statistical offices, usually you see a row-vector of imports in the use table (below the domestic uses) and a column-vector in the supply table (on the right, after the domestic productions). However, some countries also provide a product-by-industry matrix of imports.

Bo
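Stefano's row/column rule above can be checked in miniature. This is a toy Python sketch with invented numbers and a simplified table layout, not real Exiobase data:

```python
# Toy MRIO use table keyed by (product origin country, using country).
use = {
    ("X", "X"): 10.0,   # domestic use in X
    ("Y", "X"): 3.0,    # X uses products made in Y  -> X's imports
    ("X", "Y"): 2.0,    # Y uses products made in X  -> X's exports
    ("Y", "Y"): 8.0,    # domestic use in Y
}

def imports_of(country, table):
    """Sum the rest-of-countries rows for this country's uses."""
    return sum(v for (origin, user), v in table.items()
               if user == country and origin != country)

def exports_of(country, table):
    """Sum this country's rows appearing in other countries' columns."""
    return sum(v for (origin, user), v in table.items()
               if origin == country and user != country)
```

Note that one country's exports of a product are, by construction, another country's imports of it: `exports_of("X", use)` equals `imports_of("Y", use)` here.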






--

Re: #rdf #issues #exiobase

Massimo Pizzol
 

>>> The trade data appear as imports in the Exiobase use table. Note that this is different from the way it appears in national supply-use tables.

 

Am I the only one who got lost? The trade data you refer to are just the input flows from other countries? See the example in the MRIO table below.

 

 

                         Sector A,      Sector A,        Sector B,
                         country X      country Y        country Y
Product a, country X         -          trade input      trade input
Product a, country Y     trade input        -            non-trade input
Product b, country Y     trade input    non-trade input      -

 

 

 

Massimo

 

From: <main@bonsai.groups.io> on behalf of "Bo Weidema via Groups.Io" <bo.weidema@...>
Reply-To: "main@bonsai.groups.io" <main@bonsai.groups.io>
Date: Friday, 8 November 2019 at 13.17
To: "main@bonsai.groups.io" <main@bonsai.groups.io>
Subject: Re: [bonsai] #rdf #issues #exiobase

 


 

Re: #rdf #issues #exiobase

Bo Weidema
 

On 2019-11-08 at 10:59, Chris Mutel wrote:

A trade activity (does trade need to be flow-object specific? In the future, does it need to be specific to transport mode?)
International trade activities, i.e. activities that move use of an export to become an import supply, do need to be FlowObject specific. Each such trade activity can have inputs of different transport modes, insurance, and other trade facilitation activities, as well as outputs of product losses. However, in Exiobase, these international trade activities are (as far as I understand) integrated in the national data, so to avoid double counting can be ignored. Of course, at some point in time, someone will want to separate these out from the national tables, but that is not required for now.
One minor technical question - do we take trade data from the supply or use table? From my uninformed perspective, I would expect this data to be the same in both tables, but I don't underestimate the ability of data providers to surprise me anymore!
The trade data appear as imports in the Exiobase use table. Note that this is different from the way it appears in national supply-use tables.

The above is according to the best of my knowledge. I assume Stefano can correct me if I should be wrong.

Bo

Re: #rdf #issues #exiobase

Stefano Merciai
 

Hi Chris, trade data is only in the USE tables. The SUPPLY tables just show the production of activities, and the location of the outputs is the same as that of the activities which produce them. Roughly speaking, the supply table is a diagonalized vector.
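Stefano's "diagonalized vector" remark, sketched with toy numbers (an illustration, not the Exiobase layout):

```python
# Each activity supplies only its own product, so the supply table is
# (roughly) the production vector placed on the diagonal.
output = [5.0, 7.0, 2.0]   # toy production of activities 1..3

supply = [[output[i] if i == j else 0.0 for j in range(len(output))]
          for i in range(len(output))]

# Nothing is lost or gained by the diagonalization.
assert sum(sum(row) for row in supply) == sum(output)
```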


On 08/11/2019 10:59, Chris Mutel wrote:
Thanks Bo, this is clear and (at least in my opinion) completely consistent with all group decisions. Maybe I just missed it - entirely possible! - but the issue discussions did not seem to have this clear framework. I will quote from this liberally when updating the software documentation.

If I understand correctly, we take the classic activity (maybe transforming activity :) and market approach, in that:
- Activities consume from and produce to markets
- All trade is between markets

The EXIOBASE importer will then need to create triples:
For each activity:
  In each place:
    For each flow object:
      A supply flow to the national market, if non-zero
      A use flow from the national market, if non-zero

For each place alpha
  For each other place beta
    For each flow object
      A trade flow from the national market in alpha to the national market in beta (trade volume is the sum of data in EXIOBASE)

The EXIOBASE RDF URI creator will need to create:

For each activity
  In each place
    An activity
    A market activity (currently missing, AFAICT)
For each flow object
  A flow object
For each place
  A place
  For every other place
    A trade activity (does trade need to be flow-object specific? In the future, does it need to be specific to transport mode?) (currently missing, AFAICT)
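The loop structure above can be sketched in Python. Everything here is illustrative: the predicate names, the flat data layout, and the toy values are assumptions, not the real EXIOBASE importer or the BONSAI URI scheme:

```python
# Toy inputs (invented): activities, places, flow objects, and flows.
places = ["DK", "DE"]
activities = ["wheat farming"]
flow_objects = ["wheat"]

supply = {("wheat farming", "DK", "wheat"): 100.0}  # (activity, place, flow)
use = {("wheat farming", "DE", "wheat"): 20.0}
trade = {("DK", "DE", "wheat"): 30.0}  # already summed over EXIOBASE cells

triples = []
# For each activity, in each place, for each flow object:
for act in activities:
    for place in places:
        for fo in flow_objects:
            if supply.get((act, place, fo)):       # supply flow, if non-zero
                triples.append((act, "suppliesToMarket", (place, fo)))
            if use.get((act, place, fo)):          # use flow, if non-zero
                triples.append((act, "usesFromMarket", (place, fo)))

# For each place alpha, each other place beta, each flow object:
for alpha in places:
    for beta in places:
        if alpha == beta:
            continue
        for fo in flow_objects:
            if trade.get((alpha, beta, fo)):       # trade flow between markets
                triples.append(((alpha, fo), "tradesTo", (beta, fo)))
```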

One minor technical question - do we take trade data from the supply or use table? From my uninformed perspective, I would expect this data to be the same in both tables, but I don't underestimate the ability of data providers to surprise me anymore!

Comments and clarifications welcome!

-Chris

On Thu, 7 Nov 2019 at 15:01, Bo Weidema <bo.weidema@...> wrote:

Dear Chris,

The issue is not about interpreting or deleting data, but simply to make the import of it consistent with our ontology. The Exiobase SUT data includes two kinds of data 1) inputs and outputs of industries ("production activities") 2) inputs and outputs of bi-lateral trade ("market activities"). Both kinds of flow data being SUT data, they do not have location as a property, i.e. the input flows are not linked to an origin, and the output flows are not linked to a destination. This linking is what happens when we produce the Direct Requirement Matrix using the product system algorithms on the SUT.

Since the Exiobase SUT has a convention of integrating the bi-lateral market data in the data for each industry, our import algorithm needs to separate these two kinds of data, to make them consistent with the ontology, but more importantly to make them useful for the later linking.

This is done by:

1) Placing the information on bi-lateral trade flows in their respective market activities (for each of the 169 products, there are 49*49 bi-lateral markets, many of which will be empty (having no flows), and may therefore be ignored). This takes care of the disaggregated import data of the industries.

2) Aggregating the disaggregated import data of the industries, so that each industry only has 169 imported products, not the current 49*169 (since that information is already present in the above bi-lateral trade data).

This way of organising the Exiobase import preserves all data intact, and now in a more meaningful format that allows use by any relevant product system algorithm.
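Bo's two steps in miniature. The keys and numbers below are invented toy data; the real importer works on the full Exiobase tables:

```python
from collections import defaultdict

# Import rows of one industry, disaggregated by origin country (toy data).
use_rows = {
    ("DE", "wheat"): 5.0,
    ("FR", "wheat"): 2.0,
    ("DE", "steel"): 1.0,
}

bilateral_markets = {}            # step 1: origin info kept as market flows
aggregated = defaultdict(float)   # step 2: one import row per product
for (origin, product), amount in use_rows.items():
    bilateral_markets[(origin, product)] = amount
    aggregated[product] += amount
```

All data are preserved: the origin detail lives in the bi-lateral market flows, while the industry keeps a single aggregated import per product.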

Of course, this transformation is completely transparent, and an alternative could be to make an Exiobase ontology term for this, "exiobase:import origin", use this for importing the Exiobase data to RDF with this term attached, and then do the "stripping" to the BONSAI ontology in RDF. However, this would create a precedent for making RDF ontologies for all other strange data formats that people wish to provide, and making RDF converters for these. I do not think that is a road that we would want to go down. The whole purpose of the BONSAI ontology is to be lean and nevertheless complete enough to allow loss-less import of all the different kinds of data people may wish to provide.

I hope this clarifies the reason for staying with the BONSAI ontology on this point, and for adapting the import so that loss-less import of Exiobase is nevertheless possible.

Best regards

Bo

On 2019-11-07 at 12:28, Chris Mutel wrote:
Sorry, I don't want to be a pain in the ass, but I think we are going
a bit down the road of Stalin's ideological purity here... In an ideal
world, we would have separate sources of trade data, and could ignore
the EXIOBASE trade "assumptions"; but in this ideal world, we could
also take the SU tables directly from each country. EXIOBASE has the
trade information, and it is balanced. We need this trade data, and
don't really want to start a whole new project to get it from another
source (and clean it!). Bo says that "In a true SUT, the flows enter
and leave an activity but do not yet have information on their origin
and destination," but EXIOBASE is not just a SUT, it is also trade
data.

"The EXIOBASE SUT is overspecified in this sense that it already has
interpreted the information in the trade statistics in a specific
(attributional) way. This error should not be imported into the BONSAI
implementation, which should leave the user free to link SUT
activities with different linking algorithms." But we are free to
(re-)link SUT activities with different linking algorithms, even if we
import this data! All data in BONSAI are factual claims that we can
use or ignore as we wish.

We go here to a fundamental decision for the entire project, namely:
Should we let our collective or individual biases lead to data
modification **before** it enters the system? It was my impression
that our consensus decision from the hackathon was that we do not
alter or delete data before it enters the system, unless such
modification would never be controversial in any way (e.g. unit
conversions or changing labels in cases where there is zero
ambiguity). Did this change? I don't accept that it changed in a
comment in a Github issue where two people reported that they
discussed something offline.

On Wed, 6 Nov 2019 at 13:27, Matteo Lissandrini (AAU) <matteo@...> wrote:
Will this require to write from scratch the Exiobase RDF converter? do I understand correctly or is this about some other data?
But we need to update this software anyway to a) make it a proper
installable package, b) follow the URI schema that we are now using,
and c) fill in all the "TODO"s in the code.
Ok, I had the task to update this script for the USE table, as per other email, but then I'm blocked until I have the new output.

This means that for now, the published data is only for the supply table.

We will need to re-sync for completing this work.

Thanks,
Matteo

--


--
############################
Chris Mutel
Technology Assessment Group, LEA
Paul Scherrer Institut
OHSA D22
5232 Villigen PSI
Switzerland
http://chris.mutel.org
Telefon: +41 56 310 5787
############################

--


Re: #rdf #issues #exiobase

Matteo Lissandrini (AAU)
 

Hi all,


I am not in the position of taking sides.

I just want to provide one clarification here.


Both options (keep the data as is in Exiobase, or re-aggregate) are compatible with the ontology. I would say we should be quite happy with this result; it is not a minor feat! :)


Both options require an extra step when importing the data:

 - Keeping the data as is requires the instantiation of the "activity" that links the flow with the location, so that the flow is an output of that activity. I say instantiation because this is the same thing we do when instantiating the URI for Paddy-rice or for Rest of Europe, so this is not a change in the ontology.

 - Re-aggregating requires, during conversion, taking all the split flows and summing them up into a single flow.



Cheers,

Matteo







---
Matteo Lissandrini

Department of Computer Science
Aalborg University

http://people.cs.aau.dk/~matteo






From: main@bonsai.groups.io <main@bonsai.groups.io> on behalf of Bo Weidema via Groups.Io <bo.weidema@...>
Sent: Thursday, November 7, 2019 3:01:32 PM
To: main@bonsai.groups.io
Subject: Re: [bonsai] #rdf #issues #exiobase
 

Dear Chris,

The issue is not about interpreting or deleting data, but simply about making the import of it consistent with our ontology. The Exiobase SUT data includes two kinds of data: 1) inputs and outputs of industries ("production activities"), and 2) inputs and outputs of bi-lateral trade ("market activities"). Since both kinds of flow data are SUT data, they do not have location as a property, i.e. the input flows are not linked to an origin, and the output flows are not linked to a destination. This linking is what happens when we produce the Direct Requirement Matrix by applying the product system algorithms to the SUT.

Since the Exiobase SUT has a convention of integrating the bi-lateral market data in the data for each industry, our import algorithm needs to separate these two kinds of data, to make them consistent with the ontology, but more importantly to make them useful for the later linking.

This is done by:

1) Placing the information on bi-lateral trade flows in their respective market activities (for each of the 169 products, there are 49*49 bi-lateral markets, many of which will be empty (having no flows), and may therefore be ignored). This takes care of the disaggregated import data of the industries.

2) Aggregating the disaggregated import data of the industries, so that each industry only has 169 imported products, not the current 49*169 (since that information is already present in the above bi-lateral trade data).
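The two steps above can be sketched as follows, using hypothetical tidy column names rather than the actual Exiobase layout:

```python
import pandas as pd

# Hypothetical origin-split use data: (product, origin, destination, amount).
flows = pd.DataFrame({
    "product":     ["coal", "coal", "coal", "rice"],
    "origin":      ["PL", "PL", "AU", "TH"],
    "destination": ["DE", "FR", "DE", "DE"],
    "amount":      [5.0, 3.0, 2.5, 1.0],
})

# Step 1: one market activity per (product, origin, destination) pair;
# empty markets simply never appear, so the 49*49 grid stays sparse.
markets = (
    flows.groupby(["product", "origin", "destination"], as_index=False)["amount"]
    .sum()
)

# Step 2: the industries' imports keep only the product dimension; the
# origin detail now lives in the market activities above.
imports = flows.groupby(["product", "destination"], as_index=False)["amount"].sum()
```

No amounts are lost: summing `markets` and `imports` per product gives the same totals as the original origin-split data.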

This way of organising the Exiobase import preserves all data intact, and now in a more meaningful format that allows use by any relevant product system algorithm.

Of course, this transformation is completely transparent. An alternative could be to make an Exiobase ontology term for this, "exiobase:import origin", import the Exiobase data to RDF with this term attached, and then do the "stripping" down to the BONSAI ontology in RDF. However, this would create a precedent for making RDF ontologies, and RDF converters, for all the other strange data formats that people wish to provide. I do not think that is a road we would want to go down. The whole purpose of the BONSAI ontology is to be lean and nevertheless complete enough to allow loss-less import of all the different kinds of data people may wish to provide.

I hope this clarifies the reason for staying with the BONSAI ontology on this point and for adapting the import so that loss-less import of Exiobase is nevertheless possible.

Best regards

Bo

Den 2019-11-07 kl. 12.28 skrev Chris Mutel:
Sorry, I don't want to be a pain in the ass, but I think we are going
a bit down the road of Stalin's ideological purity here... In an ideal
world, we would have separate sources of trade data, and could ignore
the EXIOBASE trade "assumptions"; but in this ideal world, we could
also take the SU tables directly from each country. EXIOBASE has the
trade information, and it is balanced. We need this trade data, and
don't really want to start a whole new project to get it from another
source (and clean it!). Bo says that "In a true SUT, the flows enter
and leave an activity but do not yet have information on their origin
and destination," but EXIOBASE is not just a SUT, it is also trade
data.

"The EXIOBASE SUT is overspecified in this sense that it already has
interpreted the information in the trade statistics in a specific
(attributional) way. This error should not be imported into the BONSAI
implementation, which should leave the user free to link SUT
activities with different linking algorithms." But we are free to
(re-)link SUT activities with different linking algorithms, even if we
import this data! All data in BONSAI are factual claims that we can
use or ignore as we wish.

This brings us to a fundamental decision for the entire project, namely:
Should we let our collective or individual biases lead to data
modification **before** it enters the system? It was my impression
that our consensus decision from the hackathon was that we do not
alter or delete data before it enters the system, unless such
modification would never be controversial in any way (i.e. unit
conversions or changing labels in cases where there is zero
ambiguity). Did this change? I don't accept that it changed in a
comment in a Github issue where two people reported that they
discussed something offline.


Re: #rdf #issues #exiobase

Bo Weidema
 

Dear Chris,

The issue is not about interpreting or deleting data, but simply about making the import of it consistent with our ontology. The Exiobase SUT data includes two kinds of data: 1) inputs and outputs of industries ("production activities"), and 2) inputs and outputs of bi-lateral trade ("market activities"). Since both kinds of flow data are SUT data, they do not have location as a property, i.e. the input flows are not linked to an origin, and the output flows are not linked to a destination. This linking is what happens when we produce the Direct Requirement Matrix by applying the product system algorithms to the SUT.

Since the Exiobase SUT has a convention of integrating the bi-lateral market data in the data for each industry, our import algorithm needs to separate these two kinds of data, to make them consistent with the ontology, but more importantly to make them useful for the later linking.

This is done by:

1) Placing the information on bi-lateral trade flows in their respective market activities (for each of the 169 products, there are 49*49 bi-lateral markets, many of which will be empty (having no flows), and may therefore be ignored). This takes care of the disaggregated import data of the industries.

2) Aggregating the disaggregated import data of the industries, so that each industry only has 169 imported products, not the current 49*169 (since that information is already present in the above bi-lateral trade data).

This way of organising the Exiobase import preserves all data intact, and now in a more meaningful format that allows use by any relevant product system algorithm.

Of course, this transformation is completely transparent. An alternative could be to make an Exiobase ontology term for this, "exiobase:import origin", import the Exiobase data to RDF with this term attached, and then do the "stripping" down to the BONSAI ontology in RDF. However, this would create a precedent for making RDF ontologies, and RDF converters, for all the other strange data formats that people wish to provide. I do not think that is a road we would want to go down. The whole purpose of the BONSAI ontology is to be lean and nevertheless complete enough to allow loss-less import of all the different kinds of data people may wish to provide.

I hope this clarifies the reason for staying with the BONSAI ontology on this point and for adapting the import so that loss-less import of Exiobase is nevertheless possible.

Best regards

Bo

Den 2019-11-07 kl. 12.28 skrev Chris Mutel:

Sorry, I don't want to be a pain in the ass, but I think we are going
a bit down the road of Stalin's ideological purity here... In an ideal
world, we would have separate sources of trade data, and could ignore
the EXIOBASE trade "assumptions"; but in this ideal world, we could
also take the SU tables directly from each country. EXIOBASE has the
trade information, and it is balanced. We need this trade data, and
don't really want to start a whole new project to get it from another
source (and clean it!). Bo says that "In a true SUT, the flows enter
and leave an activity but do not yet have information on their origin
and destination," but EXIOBASE is not just a SUT, it is also trade
data.

"The EXIOBASE SUT is overspecified in this sense that it already has
interpreted the information in the trade statistics in a specific
(attributional) way. This error should not be imported into the BONSAI
implementation, which should leave the user free to link SUT
activities with different linking algorithms." But we are free to
(re-)link SUT activities with different linking algorithms, even if we
import this data! All data in BONSAI are factual claims that we can
use or ignore as we wish.

This brings us to a fundamental decision for the entire project, namely:
Should we let our collective or individual biases lead to data
modification **before** it enters the system? It was my impression
that our consensus decision from the hackathon was that we do not
alter or delete data before it enters the system, unless such
modification would never be controversial in any way (i.e. unit
conversions or changing labels in cases where there is zero
ambiguity). Did this change? I don't accept that it changed in a
comment in a Github issue where two people reported that they
discussed something offline.


EXIOBASE extensions - what do we want?

 

Please see https://github.com/BONSAMURAIS/EXIOBASE-conversion-software/issues/5,
and add your input (or reply to this email).

--
############################
Chris Mutel
Technology Assessment Group, LEA
Paul Scherrer Institut
OHSA D22
5232 Villigen PSI
Switzerland
http://chris.mutel.org
Telefon: +41 56 310 5787
############################

Re: #rdf #issues #exiobase

 

Sorry, I don't want to be a pain in the ass, but I think we are going
a bit down the road of Stalin's ideological purity here... In an ideal
world, we would have separate sources of trade data, and could ignore
the EXIOBASE trade "assumptions"; but in this ideal world, we could
also take the SU tables directly from each country. EXIOBASE has the
trade information, and it is balanced. We need this trade data, and
don't really want to start a whole new project to get it from another
source (and clean it!). Bo says that "In a true SUT, the flows enter
and leave an activity but do not yet have information on their origin
and destination," but EXIOBASE is not just a SUT, it is also trade
data.

"The EXIOBASE SUT is overspecified in this sense that it already has
interpreted the information in the trade statistics in a specific
(attributional) way. This error should not be imported into the BONSAI
implementation, which should leave the user free to link SUT
activities with different linking algorithms." But we are free to
(re-)link SUT activities with different linking algorithms, even if we
import this data! All data in BONSAI are factual claims that we can
use or ignore as we wish.

This brings us to a fundamental decision for the entire project, namely:
Should we let our collective or individual biases lead to data
modification **before** it enters the system? It was my impression
that our consensus decision from the hackathon was that we do not
alter or delete data before it enters the system, unless such
modification would never be controversial in any way (i.e. unit
conversions or changing labels in cases where there is zero
ambiguity). Did this change? I don't accept that it changed in a
comment in a Github issue where two people reported that they
discussed something offline.

--
############################
Chris Mutel
Technology Assessment Group, LEA
Paul Scherrer Institut
OHSA D22
5232 Villigen PSI
Switzerland
http://chris.mutel.org
Telefon: +41 56 310 5787
############################

Re: #rdf #issues #exiobase

Matteo Lissandrini (AAU)
 


>> Will this require rewriting the Exiobase RDF converter from scratch? Do I understand correctly, or is this about some other data?

> But we need to update this software anyway to a) make it a proper
> installable package, b) follow the URI schema that we are now using,
> and c) fill in all the "TODO"s in the code.

Ok, I had the task of updating this script for the USE table, as per the other email, but then I'm blocked until I have the new output.

This means that for now, the published data is only for the supply table.

We will need to re-sync for completing this work.

Thanks,
Matteo

Re: #rdf #issues #exiobase

Matteo Lissandrini (AAU)
 

Hi Chris,


For how the data is currently structured in the publicly available Excel file, we can ignore the location for the SUPPLY table, but we cannot "just ignore" it for the USE table; we need to recompute the aggregates:


See https://github.com/BONSAMURAIS/rdf/issues/4


---
Matteo Lissandrini

Department of Computer Science
Aalborg University

http://people.cs.aau.dk/~matteo






From: main@bonsai.groups.io <main@bonsai.groups.io> on behalf of Chris Mutel via Groups.Io <cmutel@...>
Sent: Wednesday, November 6, 2019 12:01:39 PM
To: main@bonsai.groups.io
Subject: Re: [bonsai] #rdf #issues #exiobase
 
On Wed, 6 Nov 2019 at 11:42, Agneta <agneta.20@...> wrote:
> The disaggregation to country level data is mainly to avoid the assumptions made using the trade matrix (info on markets for different flow objects). Current SUT tables include this information, given as location of flow objects.

OK, but we can just... ignore... this location? Each national S or U
table by definition only has one location anyway.

In any case, I don't think there needs to be a discussion here, as our
ontology does not allow flow objects to have a location, so we can't
input this data in the first place.

> This information is not necessary for the raw SUT data. Assumptions with respect to trade matrix can be added later as I discussed with Stefano.
>  The discussion on this was elaborated by Bo and updated on the read me for the MOJO repository ( https://github.com/BONSAMURAIS/mojo#why-flows-do-not-have-a-location)
>
> Best
> Agneta
>
> On Wed, 6 Nov 2019, 10:34 AM Chris Mutel, <cmutel@...> wrote:
>>
>> Sorry for chiming in late here.
>>
>> +1 to the idea of only using the publicly available data, it seems to
>> me a very unnecessary and dangerous step to start using private (and
>> unlicensed!) data at the beginning of our journey.
>>
>> I have, after some effort, converted the hybrid IO table to a tidy
>> format, with proper packaging and metadata in the data package format.
>> The data is here:
>> http://files.brightwaylca.org/exiobase-3.3.17-hybrid.tar, the code is
>> here: https://github.com/brightway-lca/mrio_common_metadata/tree/master/mrio_common_metadata/conversion/exiobase_3_hybrid.
>> I would be happy to do this for the SU tables as well.
>>
>> Using the tidy format, and standard packaging, would allow us to have
>> one importer that would work for multiple MRIO/SU databases, which
>> seems to me to be a big win.
>>
>> Agneta, your diagrams are great, but could you explain *why* we would
>> want to separate out the individual country tables? Is this just to
>> get the raw country data before it is processed by the EXIOBASE
>> balancing algorithm? Separating out the country tables from the
>> provided data seems easy but unnecessary.
>>
>> -C
>>
>> On Mon, 4 Nov 2019 at 16:35, Bo Weidema <bo.weidema@...> wrote:
>> >
>> > Dear Stefano,
>> >
>> > Thanks for your reflections. In the long run, BONSAI should of course not rely on Exiobase or any other processed data, but on as "raw" a data source as possible. For IO tables, this means national Supply-Use data, obtained directly from the national statistical agencies etc. Back-calculating these from Exiobase would in all cases just be a temporary solution to get a good start with a relatively complete database.
>> >
>> > Best regards
>> >
>> > Bo
>> >
>> > Den 2019-11-04 kl. 11.16 skrev Stefano Merciai:
>> >
>> > Hi all,
>> >
>> > I can provide the Exiobase hybrid version of national tables but I wonder whether this is the way to go. I try to explain it better.
>> >
>> > Exiobase is a multi-regional IO database; therefore, on the website you can only download MR-tables. Of course, those who contribute to Exiobase have the national tables as an intermediate result, stored somewhere, but not on the official Exiobase website. But then, does Bonsai rely on publicly available data or on 'confidential' data? In the light of future updates, what is the best way to go?
>> >
>> > And what if we intend to use other world IO databases such as the WIOD? We will have the same problem.
>> >
>> > However, moving from MR-tables to a national table is quite a simple procedure, because we just need to aggregate rows and columns. This procedure could easily be generalized for future imports of multi-regional databases. The only issue I see here is that Exiobase would be stored twice in the Bonsai storage.
>> >
>> > Best,
>> >
>> > Stefano
>> >
>> >
>> >
>> >
>> > On 01/11/2019 15:45, Bo Weidema wrote:
>> >
>> > I am aware that we have priority access as consortium members. I was hinting at making this publicly available...
>> >
>> > Bo
>> >
>> > Den 2019-11-01 kl. 15.28 skrev Konstantin Stadler:
>> >
>> > Hi Bo/2.0 LCA Team
>> >
>> > You (through Stefano) have access to the repository containing the country SUT tables. I can add more people to the box account if needed.
>> >
>> > Best
>> > Konstantin
>> >
>> > On 30/10/2019 17:35, Bo Weidema wrote:
>> >
>> > Den 2019-10-30 kl. 16.07 skrev Agneta:
>> >
>> > Do we need to have a separate repo that breaks out the exiobase SUT from the website? We could also get these country-specific tables directly from Stefano.
>> >
>> > Why does the EXIOBASE consortium not just publish these on the website, saving us the hassle of re-constructing the original data?
>> >
>> > Bo






Re: #rdf #issues #exiobase

 

On Wed, 6 Nov 2019 at 11:30, Matteo Lissandrini (AAU) <matteo@...> wrote:
Unfortunately, half of what you say is alien to me, but I have the impression that this is about the numeric data that the RDF converter is handling.
Indeed, this was not clear. Sorry. I was referring to
https://frictionlessdata.io/specs/data-package/ and
https://frictionlessdata.io/specs/tabular-data-resource/ and
https://en.wikipedia.org/wiki/Tidy_data; see also attached the
datapackage.json file for the IO tables.
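For illustration, a minimal descriptor in the spirit of those specs could be built like this; all names, paths, and fields here are invented for the example, not taken from the actual datapackage.json:

```python
import json

# Minimal Frictionless-style tabular data package descriptor for one tidy
# resource; the real datapackage.json for the IO tables carries many more fields.
descriptor = {
    "name": "example-io-tables",          # illustrative name
    "profile": "tabular-data-package",
    "resources": [
        {
            "name": "supply",
            "path": "supply.csv",         # illustrative path
            "profile": "tabular-data-resource",
            "schema": {
                "fields": [
                    {"name": "activity", "type": "string"},
                    {"name": "flow_object", "type": "string"},
                    {"name": "amount", "type": "number"},
                ]
            },
        }
    ],
}

serialized = json.dumps(descriptor, indent=2)
```

One importer can then read any database packaged this way by walking `resources` and using each `schema` to type the columns, which is the "one importer for multiple MRIO/SU databases" win mentioned above.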

Will this require to write from scratch the Exiobase RDF converter? do I understand correctly or is this about some other data?
No, the "intermediate" file is basically the same;
https://github.com/BONSAMURAIS/EXIOBASE-conversion-software/blob/master/scripts/excel2csv.py
does more or less the same thing, just less completely (not all
extensions, doesn't check to make sure EXIOBASE authors were
consistent in row and column labels, doesn't correct typos or clean
whitespace), and without providing metadata following a specification.
But we need to update this software anyway to a) make it a proper
installable package, b) follow the URI schema that we are now using,
and c) fill in all the "TODO"s in the code.
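The kind of label checking and cleaning described above (consistency of row and column labels, typo correction, whitespace cleanup) can be sketched roughly like this; the typo table and labels are invented for the example and are not the actual excel2csv.py logic:

```python
def clean_label(label: str) -> str:
    """Normalise whitespace and known typos in a row/column label."""
    # Illustrative typo table; a real converter would maintain its own list.
    typos = {"Paddy rise": "Paddy rice"}
    cleaned = " ".join(label.split())  # collapse whitespace runs, strip ends
    return typos.get(cleaned, cleaned)

def check_consistency(row_labels, col_labels):
    """Report labels that appear on one axis but not the other."""
    rows = {clean_label(l) for l in row_labels}
    cols = {clean_label(l) for l in col_labels}
    return rows - cols, cols - rows

# Example: after cleaning, only "Barley" is missing from the rows.
only_rows, only_cols = check_consistency(
    ["Paddy rise", " Wheat "], ["Paddy rice", "Wheat", "Barley"]
)
```

Running the check before conversion surfaces exactly the inconsistencies that would otherwise silently misalign the supply and use tables.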