
Re: Start of the #ontology sub-group #ontology

Stefano Merciai
 

I think that a square matrix can just be done by aggregation.

Best,

Stefano


On 20/03/2019 16:33, Massimo Pizzol wrote:

>>>2. We don't need the concept of a reference flow to make a technosphere matrix,

 

 

But we need that to make a square matrix. The question is whether it needs to be specified in the ontology or can be done externally like in the case of the biosphere.

Massimo

 


-- 
Best,
S.


Re: Start of the #ontology sub-group #ontology

Massimo Pizzol
 

>>>2. We don't need the concept of a reference flow to make a technosphere matrix,

 

 

But we need that to make a square matrix. The question is whether it needs to be specified in the ontology or can be done externally like in the case of the biosphere.

Massimo

 


Meeting #rdf #ontology Zoom meeting details #rdf #ontology

Agneta
 

Hi everyone

There are lots of interesting discussions coming up and we are looking forward to finalizing an ontology for the hackathon.
All group members and interested parties are invited to join the Zoom meeting:

Topic: BONSAI Ontology group
Time: Mar 22, 2019 9:00 AM Copenhagen

Join Zoom Meeting
https://zoom.us/j/521431530

Regards

Agneta


Re: Start of the #ontology sub-group #ontology

Agneta
 

Hi Elias (and others)

Just a few comments on some of the points you have mentioned:

>>Use of subclasses for input and output VS Use of predicates isInputOf and isOutputOf
I am aware that this question has caused us much confusion, so here are some clarifications. A class or sub-class contains a unique set of Uniform Resource Identifiers (URIs). All components of an RDF triple (Subject - Predicate - Object) have URIs. Having subclasses helps users of RDF data develop better queries (to search the data for what you are looking for).
In other words, as Massimo said, we “get the answer we want by making the right question”. Of course, all the flows (irrespective of input or output) could be bundled under the class Flow, but that makes our queries a bit more tedious.

>>How do we make the ontology/database usable for LCA-people if it does not have LCA-specific information in it? 
Good question, and this is one of my main concerns. It is clear to us and everyone on board here that BONSAI aims to develop a data merging platform for all areas of industrial ecology (LCA, MFA, IO, etc.). This is the reason for keeping our primary BONSAI ontology minimal.

Now let's say that, as an LCA research group, we are interested in structuring our data in a traditional way (e.g. product, by-product, emission / impact methods and characterization factors); we can develop a secondary ontology which continues to use BONSAI as the primary ontology and builds on top of it. E.g. each of my segregations (product, by-product, emission) can be a sub-class of Bonsai: Flow object.

But we don't want to do this now, as adding complexity to the ontology would be a barrier to its uptake among different IE groups.

 

Thanks again for your comments; we will discuss these issues at our meeting this Friday.
Regards

Agneta


Re: Start of the #ontology sub-group #ontology

Stefano Merciai
 

Hi,

Thank you for the nice exchange of ideas. I just want to add a few small inputs.

How do we distinguish between CO2 emitted from the chimney and the CO2 used for soft drinks?

Then I think that the value of "waste/residues" is determined by several factors, homogeneity of materials for example. The same material, if mixed with other waste flows, may have a lower value (even negative) because a waste separation service is needed. So the final price of the waste flow may be the value of the material, which can be somewhat fixed, less the cost of the separation. This is to say that there could be properties, such as "sorted" and "unsorted", that could indicate whether it is a waste flow or not.

As for the reference flow, I think that the classification of an activity gives an idea of its reference flow. A coal mining activity will have coal as its reference flow (or perhaps it is the other way around: if coal is the output of an activity, then that activity is coal mining). If we intend to insert economic values, such as prices, the determining product could be the one generating the most revenue for the activity. However, by doing that, the classification of the activity may change. I think Elias mentioned CHP plants, where both heat and electricity may be determining flows depending on the period of the year/day.
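As a toy illustration of this price-based rule (all amounts and prices below are invented), the determining product would be the output with the largest revenue, which can flip between heat and electricity as prices or amounts change:

```python
# Toy sketch of the price-based rule: pick the output with the largest
# revenue as the determining (reference) product. Figures are invented.
chp_outputs = {  # flow object -> (amount, price per unit)
    "heat": (100.0, 0.02),
    "electricity": (40.0, 0.08),
}

revenues = {obj: amt * price for obj, (amt, price) in chp_outputs.items()}
determining = max(revenues, key=revenues.get)  # here: "electricity"
```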

One last thing: there are values that are important when building a database, such as combustion coefficients (emissions produced in activity act123 when burning fuel123). Are these properties of products?

Best,

Stefano




On 20/03/2019 15:43, Massimo Pizzol wrote:
Ok, thanks, this is the solution I was asking for. We can separate the technosphere from the rest by using an external list of names. And yes, you are right about the matrix operation: it will work even if the order of columns and rows is not the same. So we need neither the ref-flow predicate nor any product subclass in the ontology.
Massimo
On 20 Mar 2019, at 15.29, Chris Mutel via Groups.Io <cmutel@...> wrote:

On Wed, 20 Mar 2019 at 12:30, Massimo Pizzol <massimo@...> wrote:
“product”, “emission”, etc. are subjective.

Agree, and formalizing them limits our flexibility. But indeed some of those might be useful to work with in an LCA context. I think that the only two pieces of information we actually need for doing LCA are: whether a flow belongs to the technosphere (all the rest is the B matrix) and whether a flow is a reference flow (the diagonal of the tech matrix). Right now I can’t think of any automatic way of determining this information from a raw list of inputs and outputs. So we have to include this info in the ontology, because we can’t use an algorithm or write code to figure this out. But perhaps I am wrong and somebody in the group has a solution for this, and then we can skip these classifications altogether; that would be perfect. I also recognize that this means introducing some subjective elements in the model, because who decides what is the technosphere? But as I wrote before, if we want to use the linked data for LCA we have to accept that there is an LCA framework.
This is a great comment, and is to me a perfect example of how
people's experience leads them to accept restraints without even
realizing it.

1. Mathematically, we don't need to distinguish between technosphere
and biosphere, this can be one big matrix. In practical terms, our
biosphere will be a different set of names; or, they will be flows for
which there is no associated producing activity.

2. We don't need the concept of a reference flow to make a
technosphere matrix, and there isn't anything special about positive
numbers on the diagonal. Production amounts can be randomly ordered,
and in any case everything produced is positive, everything consumed
is negative, regardless of whether it is a reference product,
co-product, or whatever. The notion of reference product is helpful
for humans trying to understand the reason a particular dataset was
modelled, but irrelevant for the computer doing the math.
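Both points can be sketched numerically. The following is a hedged toy example with numpy: the activities, flows, and amounts are invented, and the external name list stands in for the "different set of names" separating technosphere from biosphere; no reference-flow flag or diagonal ordering is needed to solve the system.

```python
import numpy as np

flows = [  # (activity, flow object, signed amount): + produced, - consumed
    ("coal mining", "coal", 1.0),
    ("coal mining", "electricity", -0.01),
    ("power plant", "electricity", 1.0),
    ("power plant", "coal", -0.3),
    ("power plant", "CO2", 2.5),  # no producing activity -> treated as biosphere
]

biosphere_names = {"CO2"}  # external list of names, outside the ontology

acts = sorted({a for a, _, _ in flows})  # one arbitrary ordering among many
objs = sorted({f for _, f, _ in flows if f not in biosphere_names})

A = np.zeros((len(objs), len(acts)))   # technosphere
B = np.zeros((1, len(acts)))           # biosphere (CO2 only here)
for act, obj, amount in flows:
    if obj in biosphere_names:
        B[0, acts.index(act)] += amount
    else:
        A[objs.index(obj), acts.index(act)] += amount

# Demand 1 unit of electricity; signs alone make the system solvable.
f = np.array([1.0 if o == "electricity" else 0.0 for o in objs])
x = np.linalg.solve(A, f)
co2_total = float((B @ x)[0])  # CO2 per unit of electricity delivered
```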






-- 
Best,
S.


Re: Start of the #ontology sub-group #ontology

Massimo Pizzol
 

Ok, thanks, this is the solution I was asking for. We can separate the technosphere from the rest by using an external list of names. And yes, you are right about the matrix operation: it will work even if the order of columns and rows is not the same. So we need neither the ref-flow predicate nor any product subclass in the ontology.
Massimo

On 20 Mar 2019, at 15.29, Chris Mutel via Groups.Io <cmutel=gmail.com@groups.io> wrote:

On Wed, 20 Mar 2019 at 12:30, Massimo Pizzol <massimo@...> wrote:
“product”, “emission”, etc. are subjective.

Agree, and formalizing them limits our flexibility. But indeed some of those might be useful to work with in an LCA context. I think that the only two pieces of information we actually need for doing LCA are: whether a flow belongs to the technosphere (all the rest is the B matrix) and whether a flow is a reference flow (the diagonal of the tech matrix). Right now I can’t think of any automatic way of determining this information from a raw list of inputs and outputs. So we have to include this info in the ontology, because we can’t use an algorithm or write code to figure this out. But perhaps I am wrong and somebody in the group has a solution for this, and then we can skip these classifications altogether; that would be perfect. I also recognize that this means introducing some subjective elements in the model, because who decides what is the technosphere? But as I wrote before, if we want to use the linked data for LCA we have to accept that there is an LCA framework.
This is a great comment, and is to me a perfect example of how
people's experience leads them to accept restraints without even
realizing it.

1. Mathematically, we don't need to distinguish between technosphere
and biosphere, this can be one big matrix. In practical terms, our
biosphere will be a different set of names; or, they will be flows for
which there is no associated producing activity.

2. We don't need the concept of a reference flow to make a
technosphere matrix, and there isn't anything special about positive
numbers on the diagonal. Production amounts can be randomly ordered,
and in any case everything produced is positive, everything consumed
is negative, regardless of whether it is a reference product,
co-product, or whatever. The notion of reference product is helpful
for humans trying to understand the reason a particular dataset was
modelled, but irrelevant for the computer doing the math.




Re: #bentso Bentso model development plans #bentso

 

On Wed, 20 Mar 2019 at 13:37, <miguel.astudillo@...> wrote:

Hello all

It is not very clear to me how the bentso model interacts with the other pieces of the puzzle.

For example, if we would like to modify the capacity factors of power plants of European countries in Exiobase based on bentso, would that be possible?
EXIOBASE doesn't really have capacity factors, as in general it does a
poor job of modelling capital expenditures. But we could use this to
update the relative shares of the different technologies supplying the
electricity grid.

Is it the objective to test how to add new sources of data to an existing database, leveraging the advantage of being stored using the same schema?
Partially, and also partially just to illustrate a different mental
model of LCI, where we can include something more than lists of inputs
and outputs, as well as an example of transparency, software
development methods, etc.
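A hedged sketch of what "updating the relative shares" could look like in practice. All figures below are invented; in the real workflow the newer generation totals would come from the bentso model's ENTSO-E data:

```python
# Replace an old electricity mix with shares renormalized from newer
# generation totals. All numbers are illustrative, not real data.
old_shares = {"coal": 0.45, "gas": 0.25, "wind": 0.20, "hydro": 0.10}
new_generation_twh = {"coal": 90.0, "gas": 60.0, "wind": 80.0, "hydro": 30.0}

total = sum(new_generation_twh.values())
new_shares = {tech: twh / total for tech, twh in new_generation_twh.items()}
# e.g. wind rises from 0.20 to about 0.31 in this made-up example
```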


Action required: Hackathon final planning

 

Dear all-

The hackathon is coming soon, and there are still a few things we need to lock down. Please read the following carefully. If you come to Barcelona unprepared, we will be sad...

1. Please read through the agenda, and prepare any input assigned to you on Monday morning. We can adjust the agenda as needed, but you can also suggest changes via email or PR (Bo, could you please add the Zoom meeting links to the agenda?).

2. We will use https://github.com/orgs/BONSAMURAIS/projects/2 as the hackathon project board - a central location to see how each working group is progressing. Working group coordinators, please add your deliverables ASAP. To add a list of specific tasks, use this formatting: "- [ ] Description of task" (dash, space, open bracket, space, close bracket, space, description). The project board will then show your progress on this issue. Feel free to email me if you have questions.

3. Similarly, please add your working group to https://github.com/BONSAMURAIS/hackathon-2019/blob/master/README.md following the example of other working groups.

The hackathon starts with a telemeeting at 9:00 (Barcelona time) on Monday, March 25. If needed, you can call me at +41 76 47 42 459, or Bo at +45 212 32 948.


Re: #correspondencetables - what needs to be done? #correspondencetables

Tiago Morais
 

Hi,

I just uploaded the correspondence table between EXIOBASE and US EPA.
Nevertheless, there are flows in EXIOBASE that have no correspondence among the US EPA flows (these are just a few cases).

Cheers
Tiago


Re: #bentso Bentso model development plans #bentso

Miguel Fernández Astudillo
 

Hello all

 

It is not very clear to me how the bentso model interacts with the other pieces of the puzzle.

 

For example, if we would like to modify the capacity factors of power plants of European countries in Exiobase based on bentso, would that be possible?

 

Is it the objective to test how to add new sources of data to an existing database, leveraging the advantage of being stored using the same schema?

 

Best,

 

Miguel (Astudillo)

 

From: hackathon2019@bonsai.groups.io <hackathon2019@bonsai.groups.io> On Behalf Of Elias Sebastian Azzi
Sent: 07 March 2019 01:41
To: hackathon2019@bonsai.groups.io
Subject: Re: [hackathon2019] #bentso Bentso model development plans

 

Account created, here https://transparency.entsoe.eu/homepageLogin
And API key requested by mail.

That done, I am not sure I will be able to contribute more than that for now.


Re: Start of the #ontology sub-group #ontology

Massimo Pizzol
 

Thanks Elias for the interesting reflections. I believe all your points are related. My impression is that we are converging towards an ontology that is operational with a minimal number of elements and can potentially be expanded with additional layers for specific uses (e.g. LCA).

 

  • the input and output subclasses allow us to work with raw (unlinked) data.

After my short chat with Matteo, I understand that, even if redundant, these subclasses are a more elegant way (semantically speaking) to structure our ontology, because they allow us to “get the answer we want by making the right question”. We can get the same answer indirectly, but that approach is less elegant (and since I am Italian, for me elegance is everything…). So it might actually be advantageous to keep them.

 

  • “product”, “emission”, etc. are subjective.

Agree, and formalizing them limits our flexibility. But indeed some of those might be useful to work with in an LCA context. I think that the only two pieces of information we actually need for doing LCA are: whether a flow belongs to the technosphere (all the rest is the B matrix) and whether a flow is a reference flow (the diagonal of the tech matrix). Right now I can’t think of any automatic way of determining this information from a raw list of inputs and outputs. So we have to include this info in the ontology, because we can’t use an algorithm or write code to figure this out. But perhaps I am wrong and somebody in the group has a solution for this, and then we can skip these classifications altogether; that would be perfect. I also recognize that this means introducing some subjective elements in the model, because who decides what is the technosphere? But as I wrote before, if we want to use the linked data for LCA we have to accept that there is an LCA framework.

 

Looking forward to the meeting on Friday.

BR
Massimo

p.s. +1 for “ALPHA”, ”BRAVO”, “CHARLY”. I am having some good laughs thinking about “Hot shots!” right now

 



Re: #softwaremethods Python library skeleton #softwaremethods

 

This is better directed to #reproduciblemodels - this working group is
a subtask of that one, mostly just to get a solid technical foundation
before the hackathon.

On Wed, 20 Mar 2019 at 09:19, <miguel.astudillo@...> wrote:

Dear all



It may be worth adding a couple of references to explain why doing all this version control, documentation, and testing is important. For people who are well-versed in software development this may sound obvious but for many others this is totally new. Maybe some of this should go on the “getting started” guidelines.



Specifically, I was thinking about some papers like:



Ten simple rules for making research software more robust https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005412



Best practices for scientific computing https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1001745



Lifting Industrial Ecology Modeling to a New Level of Quality and Transparency: A Call for More Transparent Publications and a Collaborative Open Source Software Framework https://onlinelibrary.wiley.com/doi/full/10.1111/jiec.12316



I’ve just seen this intro to version control https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004668.



Best,



Miguel (Astudillo)





From: hackathon2019@bonsai.groups.io <hackathon2019@bonsai.groups.io> On Behalf Of Stefano Merciai
Sent: 15 March 2019 16:02
To: hackathon2019@bonsai.groups.io
Subject: Re: [hackathon2019] #softwaremethods Python library skeleton



Dear Brandon and Tomas,

I am sorry but I have withdrawn from the group to better focus on other issues.

Best,

SM



On 14/03/2019 23:28, Chris Mutel wrote:

Dear Brandon, Stefano, and Tomas:

As I did not see much movement from your working group, I have done the following:

Filled out the python library skeleton. You can see what needs to be completed before the hackathon here.
Added a description of your group to the hackathon readme.
Added cards on the two deliverables to the hackathon project board.

Please follow up and complete these deliverables, as people will be reliant on them from the start of the hackathon.



--

Best,

S.

--
############################
Chris Mutel
Technology Assessment Group, LEA
Paul Scherrer Institut
OHSA D22
5232 Villigen PSI
Switzerland
http://chris.mutel.org
Telefon: +41 56 310 5787
############################




Last chance today to vote for your preference for social event #physical

Bo Weidema
 

Today is the last chance to give your preference order for the type of social event at https://github.com/BONSAMURAIS/hackathon-2019/blob/master/Participants.md

 

1. Inside Sagrada Familia (https://en.wikipedia.org/wiki/Sagrada_Fam%C3%ADlia)

2. Barcelona Top View

Visit to Park Güell (not the closed part) and other places with magnificent views of Barcelona. 

3. “Orienteering” with Quiz – around the old town of Barcelona

4. El Born, Barceloneta, and a walk along the beaches of Barcelona

 

For some entry tickets, age matters (below or above 30 years), as does whether you have a student card, so please indicate as in this example:

 


Re: Start of the #ontology sub-group #ontology

Elias Sebastian Azzi
 
Edited

Hello,
I finally have time for some inputs to your extensive discussions and nice summaries.

ALPHA/ Arguments to NOT introduce subclasses like “product”, “emission”, and “waste”.
I add to the list of arguments:
Products (goods or services), emissions, or waste are terms that embody value judgements.
Examples:
(i) CO2 is mostly seen as an emission to the atmosphere, but some describe it as a waste of our industrial activities (for which we could also provide treatment, direct air capture, or carbon capture and storage).
(ii) In the circular economy, waste becomes a resource; zero-waste people would say there is no waste, only resources.
(iii) The boundary between waste (paying for a treatment service) and by-product (getting paid for the material) can vary with markets/supply/demand changes.

So, the physical raw fact is that things go in and out of an activity (i.e. metabolism); and we (or I) think that this is what must be stored in the database (pure accounting).
The value judgment can come as a second layer, when using the database for life cycle impact assessments. Basically, an impact assessment method is a set of value judgments that gives us characterization factors. The Bonsai implementation of GWP100 will take all flow instances of (fossil) CO2 from any activity in the life cycle to the atmosphere and sum them up.
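A minimal sketch of that second layer (the flow records below are invented; the two characterization factors are the common IPCC AR5 GWP100 values, used here only for illustration): the database stores pure accounting, and the impact assessment method applies its value judgments on top by summing characterized flows.

```python
# Pure accounting below, value judgment on top: sum characterized flows.
flows_to_air = [  # (flow object, kg, emitting activity) - invented records
    ("CO2, fossil", 120.0, "power plant"),
    ("CH4", 0.5, "landfill"),
    ("CO2, fossil", 30.0, "truck transport"),
]

# GWP100 characterization factors, kg CO2-eq per kg (IPCC AR5, illustrative)
gwp100 = {"CO2, fossil": 1.0, "CH4": 28.0}

score = sum(gwp100.get(obj, 0.0) * kg for obj, kg, _ in flows_to_air)
# 120*1 + 0.5*28 + 30*1 = 164.0 kg CO2-eq
```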

References for the BEP:
Weidema, B. P.; Schmidt, J.; Fantke, P.; Pauliuk, S. On the Boundary between Economy and Environment in Life Cycle Assessment. Int. J. Life Cycle Assess. 2018, 23 (9), 1839–1846; DOI 10.1007/s11367-017-1398-4.

BRAVO/ Use of subclasses for input and output VS Use of predicates isInputOf and isOutputOf
From the previous meeting, I understood that the discussion raised by Massimo depends on which database we are talking about: the unlinked or the linked database.
I wrote this after our meeting, to clarify my understanding:

Note: the use of "linked" in this section does not refer to the "LinkedData" concept.
The database exists in multiple versions: an unlinked version (raw data) and linked versions (different linking assumptions lead to different versions, e.g. attributional, consequential).
The unlinked version represents the way data are collected in practice, the way data are available. Data are collected for each activity: what are the inputs of activity X and what are the outputs of activity X. The unlinked database does not allow one to say where the output of activity X goes. In practice, unless the supply chain is known in detail (note: in LCA, knowing the supply chain fully is nearly impossible), the destinations of outputs and the provenance of inputs are not known.
This information gap is solved in linked versions of the database by using linking assumptions. These linking assumptions can vary in algorithmic complexity and carry different interpretations and value judgments.

In a linked database, a flow-instance is output of a single activity and is input to a single activity.
In an unlinked database, a flow-instance is either input or output of an activity.
[I feel I could be heavily wrong on that statement; and also feel that predicates are interesting]
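The contrast can be written down as toy triples (the predicate names isOutputOf/isInputOf are assumptions for illustration, not the agreed vocabulary):

```python
# In the unlinked data, a flow instance touches one activity; a linking
# algorithm adds the second attachment. Names are illustrative.
unlinked = [
    ("flow42", "isOutputOf", "coal_mining_DE"),
    # destination unknown: no isInputOf triple yet
]

linked = unlinked + [
    ("flow42", "isInputOf", "power_plant_DE"),  # added by a system model
]

def activities_touched(triples, flow):
    return {act for subj, _, act in triples if subj == flow}

assert activities_touched(unlinked, "flow42") == {"coal_mining_DE"}
assert activities_touched(linked, "flow42") == {"coal_mining_DE",
                                                "power_plant_DE"}
```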

CHARLY/ Validation, agency and social LCA in the ontology
Bo and Chris mentioned how external data can be used for validation of the BONSAI database, e.g. with GDP data from the World Bank. This echoes https://chris.mutel.org/next-steps.html#id1 but also extends to other options for validation (e.g. remote sensing data for land use change; anthropogenic emissions).
Conceptual difference [do you agree with it?]
>> GDP value by World Bank & GDP value by Bonsai are somehow based on the same raw data (though not easily accessible)
>> Remote sensing data for NOx emissions vs Bonsai data for NOx emissions are not of the same type of raw data; different measurement techniques/reporting frameworks
This said, for the purpose of the hackathon, GDP validation is enough to implement.

The GDP example raised the question of how to include agents in the ontology. This also sounds important for social LCA (is the social LCA database on our list of data sources?).
Agents are complex. The first terms I think of are: Companies; Governments; Individuals; Employees; Households; Multinationals; Teams.
I have failed (tonight) to find an existing ontology of agents, but I am sure one exists.

With our focus on "activities", the first predicate I can think of is "isPerformedBy" an agent or set of agents. But then, it gets blurry / not easy to generalise.
Example:
Activity = "Electricity production" #isPerformedBy Agent = "Coal power plant nb 1234"
Agent = "Coal power plant nb 1234" #isLocatedIn Country = "Germany", #isOwnedBy Entity = "InternationalPowerCompany" (at 60%), #isOwnedBy Entity= "City of Dusseldorf" (at 40%)
Agent = "Coal power plant nb 1234"  #hasWorkers literal = "70"
Company = "InternationalPowerCompany" #hasHighSkilledEmployees literal = "2000"
Company = "InternationalPowerCompany" #hasMediumSkilledEmployees literal = "5000"
...
Issues:
- ownership of a plant by several entities
- entities, companies, being multinational, much larger than the plant
- workers are not the same as employees: working somewhere vs. employed by someone? One can be employed by a company but work in several places/plants.

Simplification: only have population/agent data for "super classes" that aggregate at the sector or country level, as in EXIOBASE? Issues: how to deal with multinational entities, companies, workers?

DELTA/ How do we make the ontology/database usable for LCA-people if it does not have LCA-specific information in it?
Massimo asked this question.
If not directly included, I am guessing that the ontology/database becomes usable for LCA-people (or other types of people) via some additional layers.
For impact assessments, see reply to ALPHA.
For knowing the reference flow of an activity, I think that this is solved in the linked databases (linked as in BRAVO, not LinkedData); but if you work with the raw unlinked data, you have to make the assumption anyway.

Besides, my (very) long-term vision for Bonsai is to advance further in merging industrial ecology methods: LCA, MFA, and even IOA, IAM, and all forms of socioeconomic metabolism analysis, including bridges to dynamic system modelling and complex system modelling.

 


Re: Competency Questions #ontology #rdf

Matteo Lissandrini (AAU)
 

Thanks Bo for the clarification

the concept of macroeconomic scenario which is not present in the data
I've seen.
Please see the BONSAI glossary:
https://github.com/BONSAMURAIS/bonsai/wiki/Glossary

I've seen this, my concern is that the EXIOBASE data does not contain any explicit information about this, right?
So this information should come from somewhere else(?) and how this is represented in the database should be defined.
At the moment my best guess is to have a "named graph" with associated metadata (I can explain better in the call).


That the coal C4 from CM2 is actually used as the input coal C1 for SP1.
Yes, there are in fact (at least) two different instances of the database:

- one before linking, in which flows are only recorded as being either
inputs to or outputs from an activity

- one after linking (which implies the application of the algorithms of
a specific system model, as described by Chris), where each flow is
recorded as a flow between two specific activities

The ontology can (or should be able to) handle both these instances.

I think I understand this. Again this information may come from different sources (e.g., the algorithms).
We need to extend the model so this is represented correctly.

Also, I had a quick chat with Massimo so that I could better understand the technicality of input & output and how they are read.
I think this is in line with the two versions of the data referred to by Bo.

In general, I agree that the two classes (input/output) are redundant in a sense.
We could remove the classes and keep only the predicates, and we would not lose information.
Note that the two classes are to distinguish the Flow, not the Flow-object (or the object that flows).
So the object that flows can flow out from an activity and into another one (here are the two flows).

The reason for the two classes is to help formulate the right query: if you ask whether a flow is an input or an output, having the two classes you can ask exactly that; without the two classes you need to ask: is there an activity this flow is an input of?
You still get the same answer, but you need a different question.





Best,
Matteo



---
Matteo Lissandrini

Department of Computer Science
Aalborg University

http://people.cs.aau.dk/~matteo


Exiobase v3.3.17

Stefano Merciai
 

Hi all,

I just want to let you know that the latest version of the Exiobase multi-regional hybrid tables, i.e. v3.3.17, has been uploaded to exiobase.eu

I have changed the labels of the HIOT product classification in order to improve consistency with the HSUTs. I will soon upload the correspondence table to GitHub. Barring big issues, the final format will be that one. Then, if you want, I can add other data.

I will upload a matrix of prices soon after the hackathon.

Best,

Stefano


Re: #correspondencetables - what needs to be done? #correspondencetables

Stefano Merciai
 

Hi all,

I just want to let you know that the latest version of Exiobase multi-regional hybrid tables, i.e. v3.3.17, is on exiobase.eu

I have changed the labels of the HIOT product classification in order to improve consistency with the HSUTs. I will soon upload the correspondence table to GitHub. Barring big issues, the final format should be that one. Then, if you want, I can add other data.

Best,

Stefano


On 15/03/2019 14:38, Tiago Morais wrote:

Hi all,

 

I have already finished the correspondence between v3 and v2 of Exiobase, but I'm not authorized to upload to the GitHub. Thus, I attached the file here.

 

Meanwhile, I will also start to work on the point 3 from Chris’s list.


-- 
Best,
S.


your preference for social event #physical

Bo Weidema
 

Hi,

For those of you who are physically present in Barcelona:

We have scheduled Wednesday afternoon off for a social event.

To choose the type of event that will delight most of you, please edit your entry in https://github.com/BONSAMURAIS/hackathon-2019/blob/master/Participants.md, indicating after your name these 4 options in preference order:

1. Inside Sagrada Familia (https://en.wikipedia.org/wiki/Sagrada_Fam%C3%ADlia)

 

2. Barcelona Top View

Visit to Park Güell (not the closed part) and other places with magnificent views of Barcelona.

 

3. “Orienteering” with Quiz – around the old town of Barcelona

 

4. El Born, Barceloneta, and a walk along the beaches of Barcelona


For some entry tickets, age matters (below or above 30 years), as does whether you have a student card, so please indicate as in this example:


The sooner the better, in order to guarantee reservations.

Thanks

Bo