
Re: #reproduciblemodels working group - getting organized #reproduciblemodels

Brandon Kuczenski
 

Hey folks,
I can probably make 5 pm tomorrow (9 am Pacific). 5:30 is also fine, but I am sensitive to the hungry kids. I can also do pretty much any time later than 5 pm, so let me know if something else is better.

Here's a Zoom link:

Brandon Kuczenski is inviting you to a scheduled Zoom meeting.


Topic: reproducibility group

Time: Mar 18, 2019 9:00 AM Pacific Time (US and Canada)


Join Zoom Meeting

https://ucsb.zoom.us/j/222361622


One tap mobile

+16699006833,,222361622# US (San Jose)

+16468769923,,222361622# US (New York)


Dial by your location

+1 669 900 6833 US (San Jose)

+1 646 876 9923 US (New York)

Meeting ID: 222 361 622

Find your local number: https://zoom.us/u/awvzew4J


Join by SIP

222361622@...



On Mon, Mar 18, 2019 at 12:25 AM <miguel.astudillo@...> wrote:

For me 5:30 is fine; let's see if Brandon can make it.

 

Best, Miguel

 

From: hackathon2019@bonsai.groups.io <hackathon2019@bonsai.groups.io> On Behalf Of Massimo Pizzol
Sent: 15 March 2019 15:37
To: hackathon2019@bonsai.groups.io
Subject: Re: [hackathon2019] #reproduciblemodels working group - getting organized

 

5:30 PM is really a bad time for me (“kids are hungry” time), but go ahead. I'll do my best, and if I can't join, then amen!

 

Massimo

 

 

From: <hackathon2019@bonsai.groups.io> on behalf of "Carlos David Gaete via Groups.Io" <cdgaete@...>
Reply-To: "hackathon2019@bonsai.groups.io" <hackathon2019@bonsai.groups.io>
Date: Friday, 15 March 2019 at 15.23
To: "hackathon2019@bonsai.groups.io" <hackathon2019@bonsai.groups.io>
Subject: Re: [hackathon2019] #reproduciblemodels working group - getting organized

 

Hi all,

 

It would be best for me next week: Monday, Tuesday...

I think we can meet after 5 pm CET so that Brandon can join us. I therefore propose Monday 5:30 pm CET.

Regards

Carlos



--
Brandon Kuczenski, Ph.D.
Associate Researcher

University of California at Santa Barbara
Institute for Social, Behavioral, and Economic Research
Santa Barbara, CA 93106-5131

email: bkuczenski@...


Re: Competency Questions #ontology #rdf

Bo Weidema
 

Dear Miguel,

These are relevant issues. However, for the time being, we have restricted ourselves to data that reflect averages over a duration of at least one year. This is because these are the typical data used, and a finer granularity would risk overcomplicating the model relative to the typical data in use in the domain. Nevertheless, I believe that in the future it will be relevant to allow more flexibility here, and I think that will also be possible without actually changing the ontology. The current restriction is not ontological, just practical.

Best regards

Bo

Den 2019-03-18 kl. 05.24 skrev mmremolona via Groups.Io:

Hi all,

Sorry for not participating as much the past few weeks. I'm trying to catch up with what everyone has said so far.

In terms of these competency questions, I guess the question that Massimo is asking is with respect to time scales and time windows. I'm not entirely familiar with the datasets available in the domain, but these measurement time scales can cause some incongruity in the representation that is finally done in the ontologies. I'm not sure if the questions I ask are of the type to be included in these competency questions, but my opinions are as follows:

(MR_Q1) What is the time granularity of the data that we acquire? This includes flow rates and production statistics. I also assume this varies with the different sources of data. Some data may already be averaged (do we handle these differently?).
(MR_Q2) Are we going to aggregate data as part of the ontology specification, or is this left for other parts of the pipeline? And if we are to aggregate data, to what degree and at what time scales? (Per hour, per day, per week? I think this depends on how often we aggregate data and what data are available; I don't think per-minute data is significant in the overall scheme of LCA, but I might be wrong.)

As of now, these are the questions that came to my head as I'm reading along the threads in this group. I'll post more ideas as I come across them.

Best,

Miguel Remolona
--


Re: Start of the #ontology sub-group #ontology

Massimo Pizzol
 

Dear Ontology/RDF group

 

We have a meeting Friday and I would like to share some points for discussion.

 

I am thinking a lot about our ontology and there are two pressing issues that I hope we can clarify.

 

  1. The use of “input” and “output” subclasses

 

Bo has suggested this below as arguments to NOT introduce subclasses like “product”, “emission”, and “waste”.

 

>>> Principle: We try to avoid making fixed choices, like sign nomenclatures, that are only useful in specific contexts.

>>> Principle: It is a good practice for a model to stay as close to reality as possible

>>> Principle: Do not introduce unneccesary (obligatory) classifications

 

I agree with these principles and I think it makes sense not to have the product/waste/etc. subclasses. My problem is that I don’t see how the choice of using the “input” and “output” subclasses fits with these principles. It is a sign convention, useful only in specific contexts, and it is an obligatory classification. I don’t know whether classifying things as “input” and “output” is any closer to reality than classifying them as “product”, “emission”, or “waste”. Thus, my preference is to remove the “input” and “output” subclasses and keep only the “isInputOf” and “isOutputOf” predicates.

 

So far the arguments for using the input and output subclasses have been:

 

>>> at the ontology level to restrict the domain of the input and output relationships.

I am not totally clear on what this means. My concern is whether the use of subclasses unnecessarily increases the complexity of the model because, assuming I have understood things correctly, there would be two instances of e.g. a “coal” flow: one is the “coal input flow” and the other is the “coal output flow”, each with a different URI. So if you are looking at the instance “electricity production” you will find it is related to a specific URI for coal input, and if you are looking at the instance “coal production” you will find it is related to another, different URI for coal output. So the same thing (coal) in physical reality is now described by two different codes.

>>> Assume you are looking at a specific instance of 10 tonnes of coal in your database, then you ask yourself “is this an input for something or an output of something?”

My view is that in physical reality 10 kg of coal is not the output or the input of something in absolute terms. It is just coal, i.e. an object. Whether it is an input or an output is determined only in relative terms, i.e. in relation to another object (an activity). Coal is an output of coal production. Coal is an input to electricity production. I would instead ask this type of question: “Who is this 10 kg of coal associated with?” And what I would expect to find out is that it is the output of a coal production activity and the input of an electricity production activity.
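To make the predicate-only alternative concrete, here is a minimal sketch in Python with rdflib; the namespaces and the exact spellings (bont:isInputOf, bont:isOutputOf, bont:Flow, bont:Activity) are placeholders for illustration, not the agreed BONSAI vocabulary. A single coal URI is related to two activities, and its role as input or output only exists relative to each of them.

```python
from rdflib import Graph, Namespace, RDF

# Placeholder namespaces; the real BONSAI vocabulary may differ.
BONT = Namespace("http://example.org/bonsai/ontology#")
EX = Namespace("http://example.org/bonsai/instance#")

g = Graph()
g.bind("bont", BONT)
g.bind("ex", EX)

# One flow instance of coal and two activities.
g.add((EX.coal_1, RDF.type, BONT.Flow))
g.add((EX.coal_production, RDF.type, BONT.Activity))
g.add((EX.electricity_production, RDF.type, BONT.Activity))

# The same coal URI is an output of one activity and an input to another;
# "input" and "output" appear only as relations, not as subclasses.
g.add((EX.coal_1, BONT.isOutputOf, EX.coal_production))
g.add((EX.coal_1, BONT.isInputOf, EX.electricity_production))

# "Who is this coal associated with?" is answered by its outgoing triples alone.
for predicate, obj in g.predicate_objects(EX.coal_1):
    print(predicate, obj)
```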

 

>>>for sure you could find the answer by checking  "is this the source of inputOf relationships?",

This sounds really nice IMO! I was thinking this was actually how we should find out about things. I also guess that this is a competency question? I would like to better understand why this is not  sufficiently “operational”.

 

>>>but operationally you can ask "what is the type of this? And in my view this would be the correct way to do this because something is either input or output"

I argued above that in physical reality something is not either input or output in absolute terms. Anyway, we could certainly ask the question “what is the type of this?” with reference to whether something is classified as input or output. But if we start asking this type of question for the input vs. output classification, then, to be consistent, why not ask the same type of question for every other possible classification? For example: is something a product exchange or an environmental exchange? I could ask “Is CO2 an emission or a product?” But Bo has argued, based on the principles above, that this is not a relevant question. So why is the question relevant for input and output?

 

  2. In general, I am unclear on how much we should adhere to existing LCA frameworks.

 

>>> LCA people all have their own mental model.

On one hand, I agree we should keep an open mind and not be constrained by specific mental models. But on the other hand, I also understand that we are doing this for the use of “LCA people” too. I thought one of our purposes was to create an infrastructure to support LCAs (e.g. because by making specific queries one can get LCA datasets). If our purpose is to make an ontology that is valid for all models in all disciplines, from economics to environmental sciences, then perhaps the terms “input” and “output” are the most generic ones (they can apply to anything from a tree to a whole country’s economy) and this might be sufficient (preferably as predicates, as I argued above). However, in order to use the linked data to create some LCIs, we would need some way of separating what goes in the A matrix (products), what goes in the B matrix (substances, costs, or many other things), and what is the reference flow, because this is what LCA people are used to working with. So perhaps we have to allow for the possibility of identifying this LCA-specific information. With the current ontology, the “only” information we can obtain from e.g. the graph of steel production is a list of inputs and outputs. So how do I determine that steel, rather than CO2, is the reference flow of steel production?

 

 

Hope this was useful and I am looking forward to a good discussion on Friday
Massimo

 

 


Re: Start of the #ontology sub-group #ontology

 

Two small things.

1. This discussion of Apache Jena might be interesting for some of you: https://news.ycombinator.com/item?id=19419025

2. Thanks Massimo, I think this is a great format and makes it much easier to follow the train of ideas, especially over multiple days. Let's do this more!


Re: Start of the #ontology sub-group #ontology

Brandon Kuczenski
 

Massimo,
Let me weigh in on the input/output question. In my view, a flow is not an input or an output; it has to be both. It has to be an output of the process that created it and an input to the process that consumed it. The flow is the same in both cases; therefore it is an error to call it one or the other.

I haven't seen the term 'exchange' used very much, but in my view a flow is simply a product/substance/material/service plus a quantity of measurement (say, 'mass'). (This has to be fixed in order for the use of many different databases to be stable.) I think of an exchange as a 4-tuple: an activity that defines the exchange (which I call the parent), a flow that is being exchanged, a direction with respect to the parent, and a termination, which is the other activity (or compartment or stock or market) that is the partner to the exchange. If the termination is null, then it's a cutoff flow; auditing these flows is part of reviewing a model.
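As a reading aid only, here is one way to write that 4-tuple down as a plain data structure; the class and field names are made up, not a proposed interface, and a None termination marks a cutoff flow as described above.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Direction(Enum):
    INPUT = "input"    # the flow enters the parent activity
    OUTPUT = "output"  # the flow leaves the parent activity


@dataclass(frozen=True)
class Flow:
    """A flowable thing plus its quantity of measurement, e.g. ('coal', 'mass')."""
    flowable: str
    quantity: str


@dataclass(frozen=True)
class Exchange:
    """The 4-tuple: parent activity, flow, direction, termination. No exchange value."""
    parent: str                 # activity that defines the exchange
    flow: Flow                  # what is being exchanged
    direction: Direction        # with respect to the parent
    termination: Optional[str]  # partner activity/compartment/stock/market; None = cutoff


coal = Flow("coal", "mass")
buys = Exchange("electricity production", coal, Direction.INPUT, "coal production")
sells = Exchange("coal production", coal, Direction.OUTPUT, "electricity production")
loose_end = Exchange("electricity production", Flow("waste heat", "energy"),
                     Direction.OUTPUT, None)

# Auditing cutoffs: exchanges with no termination.
print([e for e in (buys, sells, loose_end) if e.termination is None])
```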

This view is pretty consistent with your discussion about "Who is this 10 kg of coal associated with?"

A characteristic of this definition is that it is non-numeric, i.e. there is no quantitative information, only adjacency. This helps to define the model without getting hung up on what the exchange value or uncertainty is. Obviously there could be uncertainty in the termination (from where / what supplier / what time of day / etc.?), but that is not quantitative uncertainty.

When the parent activity is invoked as part of a query, it would be "responsible" for "figuring out" the exchange value given the query it is answering, and the termination could / would have to be figured out by the software that is doing the query. But it's the exchange that is directional, not the flow.

I will try to make the call on Friday but I'm not sure what time it is.

-Brandon


--
Brandon Kuczenski, Ph.D.
Associate Researcher

University of California at Santa Barbara
Institute for Social, Behavioral, and Economic Research
Santa Barbara, CA 93106-5131

email: bkuczenski@...


Re: Competency Questions #ontology #rdf

Matteo Lissandrini (AAU)
 

Dear all,

I've collected the discussion, and something more, re: competency questions in the wiki of the RDF framework repository [1]; this will have to be restructured.
Please feel free to fix any typo or other issue you see.
Also, let me know if you add any new competency questions.


There are a number of things still open on this. For instance, the concept of a macroeconomic scenario comes up in the questions but is not present in the data I've seen.
The Input/Output issue probably connects to this (or maybe not, please let me know).

In our last conversation, a very important detail emerged that was missing for me (and so from the modeling):
The EXIOBASE data is a specific static snapshot; in this data the activities are actually the center, and each flow is either an input to or an output of exactly one activity.
E.g. there is this steel production SP1 that consumes some coal C1 and outputs some steel S1.
Then there is this other steel production SP2 that consumes some different coal C2 and outputs some other steel S2.
Then there is this coal mine CM1 that outputs some coal C3.
Then there is this other coal mine CM2 that outputs some coal C4.

When we reason about a product footprint, some other data/analysis process intervenes and links the flows in some way, so only at this point can we record:
That the coal C4 from CM2 is actually used as the input coal C1 for SP1.
The coal C3 from CM1 is actually the input coal for SP2.

So C4 = C1 is at the same time an output (of CM2) and an input (for SP1). Is this the issue?

Yet, there may be cases like the following:
CM1, which outputs C3, actually splits it 70% into C1 for SP1 and 30% into C2 for SP2.

Please, let me know if I'm understanding this correctly.
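If I am reading this right, the two states could be restated in plain Python as below; the identifiers are the made-up ones from the example above, and this is only a reading aid, not a proposed data format.

```python
# Unlinked: each flow is recorded against exactly one activity.
unlinked = [
    ("SP1", "input",  "coal",  "C1"),
    ("SP1", "output", "steel", "S1"),
    ("SP2", "input",  "coal",  "C2"),
    ("SP2", "output", "steel", "S2"),
    ("CM1", "output", "coal",  "C3"),
    ("CM2", "output", "coal",  "C4"),
]

# Linked: a system model adds the pairing between flows,
# e.g. C4 = C1 (CM2 supplies SP1) and C3 = C2 (CM1 supplies SP2).
links = {"C1": ("CM2", "C4"), "C2": ("CM1", "C3")}

for consumed, (source_activity, produced) in links.items():
    print(f"{produced} from {source_activity} is the same coal as {consumed}")

# A split such as "CM1 sends 70% of C3 to SP1 and 30% to SP2" would instead
# need shares, e.g. {"C3": [("SP1", 0.7), ("SP2", 0.3)]}.
```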



Thanks a lot,
Matteo


[1] https://github.com/BONSAMURAIS/BONSAI-ontology-RDF-framework/wiki/The-BONSAI-Ontology-and-RDF-Framework


Re: Start of the #ontology sub-group #ontology

Massimo Pizzol
 

Thanks Brandon

 

>>> a flow is not an input or an output- it has to be both.

I completely agree, and this is what I was trying to write as well. In my understanding, a “flow” object is not an input or an output in absolute terms but only in relative terms, i.e. in relation to another “activity” object. Therefore, using the predicates “isInputOf” and “isOutputOf” seems to me an appropriate and sufficient way to express this relationship, while I don’t think we should use the “input” and “output” subclasses, for the reasons previously outlined (not fully correct, redundant, inconsistent).

 

BR
Massimo

 


Re: #reproduciblemodels working group - getting organized #reproduciblemodels

Brandon Kuczenski
 

Hey all,
I created a number of issues in the reproducibility repo and assigned one to each of you. Let's all try to take a turn on our assigned issue by the end of Wednesday.
Work can be done directly in the issue or through contributions to the repo. I will also try to fill out the written docs.

-Brandon



--
Brandon Kuczenski, Ph.D.
Associate Researcher

University of California at Santa Barbara
Institute for Social, Behavioral, and Economic Research
Santa Barbara, CA 93106-5131

email: bkuczenski@...


Re: #reproduciblemodels working group - getting organized #reproduciblemodels

Massimo Pizzol
 

Sorry guys, I missed the call yesterday. I totally messed up the time zones and thought it was today. Stupid mistake.

I read the minutes, and it’s all right; I will work on the specific task I have been assigned.

BR
Massimo

 


Re: Competency Questions #ontology #rdf

 

This is a very important question! The following is my opinion, and others might have a different perspective.

Questions about what can really substitute for what (e.g. for coal this means sulfur content and energy density, but also, in general, lignite requires totally different handling than bituminous coal) have long been known to be difficult; similarly, the correct way of modelling markets with multiple providers, trade, and re-export is also tricky. They are tricky because in most cases we have to make value judgments about what we think is the best model, without really being able to get the "right" answer.

As such, these decisions should be made by the system modelling software (see my recent blog post) and should not be addressed in the data format. We want to be able to try multiple approaches and to quantify the effects of different choices. Instead, the data format should be able to represent different kinds of coal, their origin locations, trade patterns (trade is an activity, the same as other activities), and the properties of these coals. The data format can also give the volume of each specific kind of coal consumed by various activities in a region. The system model is responsible for taking this large set of data points and creating a balanced view of a possible world.
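A toy sketch of that division of labour, with made-up names, numbers, and properties (this is not any real BONSAI format): the data layer only records kinds of coal and their properties, while a separate system-model function carries the value judgment about substitutability.

```python
# Data layer: plain facts about flow objects and their properties (made-up values).
coals = {
    "lignite_DE":    {"sulfur_pct": 1.5, "energy_MJ_per_kg": 10.0},
    "bituminous_AU": {"sulfur_pct": 0.6, "energy_MJ_per_kg": 27.0},
}

# System model layer: one of many possible substitution rules.
def candidate_substitutes(coal_id, max_sulfur_pct=1.0):
    """The value judgment lives here, not in the data format."""
    return [
        other for other, props in coals.items()
        if other != coal_id and props["sulfur_pct"] <= max_sulfur_pct
    ]

print(candidate_substitutes("lignite_DE"))  # ['bituminous_AU'] under this rule
```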


Re: Competency Questions #ontology #rdf

Bo Weidema
 

Den 2019-03-18 kl. 23.35 skrev Matteo Lissandrini (AAU):

the concept of macroeconomic scenario which is not present in the data I've seen.
Please see the BONSAI glossary: https://github.com/BONSAMURAIS/bonsai/wiki/Glossary

In our last conversation it appeared this very important detail that was missing to me (and so to the modeling):
The EXIOBASE data is a specific static snapshot, in this data the center is actually the activities and each flow is either input to 1 or output to 1 activity only.
When we reason about product footprint, then intervenes some other data/analysis process that links in some way flows, so only at this point we can record:
That the coal C4 from CM2 is actually used as the input coal C1 for SP1.
Yes, there are in fact (at least) two different instances of the database:

- one before linking, in which flows are only recorded as being either inputs to or outputs from an activity

- one after linking (which implies the application of the algorithms of a specific system model, as described by Chris), where each flow is recorded as a flow between two specific activities

The ontology can (or should be able to) handle both these instances.

Bo


Re: #reproduciblemodels working group - getting organized #reproduciblemodels

Miguel Fernández Astudillo
 

Oops… Sorry, me too; I was convinced it was today. Yesterday at that time I was in front of the computer :/

 

I will try to catch up with the minutes

 

Miguel (F.A.)

 

 

From: hackathon2019@bonsai.groups.io <hackathon2019@bonsai.groups.io> On Behalf Of Massimo Pizzol
Sent: 19 March 2019 08:08
To: hackathon2019@bonsai.groups.io
Subject: Re: [hackathon2019] #reproduciblemodels working group - getting organized

 

Sorry guys I missed the call yesterday. I totally messed up with the time zone, I thought it was today, stupid mistake.

I read the minutes and it’s alright will work on the specific task I have been assigned to.

BR
Massimo

 


Re: Competency Questions #ontology #rdf

 

On Tue, 19 Mar 2019 at 09:39, Bo Weidema <bo.weidema@bonsai.uno> wrote:

Den 2019-03-18 kl. 23.35 skrev Matteo Lissandrini (AAU):

the concept of macroeconomic scenario which is not present in the data
I've seen.
Please see the BONSAI glossary:
https://github.com/BONSAMURAIS/bonsai/wiki/Glossary

In our last conversation it appeared this very important detail that
was missing to me (and so to the modeling):
The EXIOBASE data is a specific static snaptshot, in this data the
center is actually the activities and each flow is either input to 1
or output to 1 activity only.
When we reason about product footprint, then intervenes some other
data/analysis process that links in some way flows, so only at this
point we can record:
That the coal C4 from CM2 is actually used as the input coal C1 for SP1.
Yes, there are in fact (at least) two different instances of the database:

- one before linking, in which flows are only recorded as being either
inputs to or outputs from an activity

- one after linking (which implies the application of the algorithms of
a specific system model, as described by Chris), where each flow is
recorded as a flow between two specific activities

The ontology can (or should be able to) handle both these instances.
This is, of course, correct, though it does help clarify things for me.

However, perhaps this needs a different conceptual approach? Perhaps something like a resolved exchange, which has isInputOf *and* isOutputOf? Probably the language would have to be adjusted. But I guess we need this for trading activities in any case.

Bo





--
############################
Chris Mutel
Technology Assessment Group, LEA
Paul Scherrer Institut
OHSA D22
5232 Villigen PSI
Switzerland
http://chris.mutel.org
Telefon: +41 56 310 5787
############################


your preference for social event #physical

Bo Weidema
 

Hi,

For those of you who are physically present in Barcelona:

We have scheduled Wednesday afternoon off for a social event.

To choose the type of event that will delight the most of you, please edit your entry in https://github.com/BONSAMURAIS/hackathon-2019/blob/master/Participants.md, indicating after your name these 4 options in preference order:

1. Inside Sagrada Familia (https://en.wikipedia.org/wiki/Sagrada_Fam%C3%ADlia)

 

2. Barcelona Top View

Visit to Park Güell (not the closed part) and other places with magnificent views of Barcelona.

 

3. “Orienteering” with Quiz – around the old town of Barcelona

 

4. El Born, Barceloneta, and a walk along the beaches of Barcelona


For some entry tickets, age matters (below or above 30 years), as does whether you have a student card, so please indicate this as in this example:


The sooner the better, in order to guarantee reservations.

Thanks

Bo


Re: #correspondencetables - what needs to be done? #correspondencetables

Stefano Merciai
 

Hi all,

I just want to let you know that the latest version of the Exiobase multi-regional hybrid tables, i.e. v3.3.17, is on exiobase.eu.

I have changed the labels of the HIOT product classification in order to improve consistency with the HSUTs. I will soon upload the correspondence table to GitHub. Unless big issues arise, the final format should be that one. Then, if you want, I can add other data.

Best,

Stefano


On 15/03/2019 14:38, Tiago Morais wrote:

Hi all,

 

I have already finished the correspondence between v3 and v2 of Exiobase, but I’m not authorized to upload it to the GitHub repository, so I have attached the file.

 

Meanwhile, I will also start to work on point 3 from Chris’s list.


-- 
Best,
S.


Exiobase v3.3.17

Stefano Merciai
 

Hi all,

I just want to let you know that the latest version of the Exiobase multi-regional hybrid tables, i.e. v3.3.17, has been uploaded to exiobase.eu.

I have changed the labels of the HIOT product classification in order to improve consistency with the HSUTs. I will soon upload the correspondence table to GitHub. Unless big issues arise, the final format will be that one. Then, if you want, I can add other data.

I will upload a matrix of prices soon after the hackathon.

Best,

Stefano


Re: Competency Questions #ontology #rdf

Matteo Lissandrini (AAU)
 

Thanks Bo for the clarification

the concept of macroeconomic scenario which is not present in the data
I've seen.
Please see the BONSAI glossary:
https://github.com/BONSAMURAIS/bonsai/wiki/Glossary

I've seen this; my concern is that the EXIOBASE data does not contain any explicit information about this, right?
So this information should come from somewhere else (?), and how it is represented in the database should be defined.
At the moment my best guess is to have a "named graph" with associated metadata (I can explain better on the call).


That the coal C4 from CM2 is actually used as the input coal C1 for SP1.
Yes, there are in fact (at least) two different instances of the database:

- one before linking, in which flows are only recorded as being either
inputs to or outputs from an activity

- one after linking (which implies the application of the algorithms of
a specific system model, as described by Chris), where each flow is
recorded as a flow between two specific activities

The ontology can (or should be able to) handle both these instances.

I think I understand this. Again this information may come from different sources (e.g., the algorithms).
We need to extend the model so this is represented correctly.

Also, I had a quick chat with Massimo so that I could better understand the technicalities of input & output and how they are read.
I think this is in line with the two versions of the data referred to by Bo.

In general, I agree that the two classes (input/output) are redundant in a sense.
We could remove the classes and keep only the predicates, and we would not lose information.
Note that the two classes are there to distinguish the Flow, not the Flow-object (i.e. the object that flows).
So the object that flows can flow out of one activity and into another one (these are the two flows).

The reason for the two classes is to help formulate the right query: if you want to know whether a flow is an input or an output, having the two classes lets you ask exactly that; without the two classes you need to ask: is there an activity this flow is an input of?
You still get the same answer, but you need a different question.
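To make the two query styles concrete, a small rdflib sketch with placeholder namespaces and data (the prefixes and predicate spellings are assumptions, not the agreed vocabulary): with the classes you would ask for the rdf:type, without them you ask whether an isInputOf triple exists.

```python
from rdflib import Graph, Namespace

# Placeholder namespaces, not the agreed BONSAI vocabulary.
BONT = Namespace("http://example.org/bonsai/ontology#")
EX = Namespace("http://example.org/bonsai/instance#")

g = Graph()
g.add((EX.coal_1, BONT.isInputOf, EX.electricity_production))
g.add((EX.coal_1, BONT.isOutputOf, EX.coal_production))

# Without input/output classes: "is there an activity this flow is an input of?"
ask_via_predicate = "ASK { ex:coal_1 bont:isInputOf ?activity }"

# With an Input subclass (if it existed), the question would be a type check instead:
#   ASK { ex:coal_1 a bont:Input }

result = g.query(ask_via_predicate, initNs={"ex": EX, "bont": BONT})
print(result.askAnswer)  # True: same answer, different question
```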





Best,
Matteo



---
Matteo Lissandrini

Department of Computer Science
Aalborg University

http://people.cs.aau.dk/~matteo


Re: Start of the #ontology sub-group #ontology

Elias Sebastian Azzi
 

Hello,
I finally have time for some input on your extensive discussions and nice summaries.

ALPHA/ Arguments to NOT introduce subclasses like “product”, “emission”, and “waste”.
I add to the list of arguments:
Products (goods or services), emissions, or waste are terms that embody value judgements.
Examples:
(i) CO2 is mostly seen as an emission to the atmosphere, but some describe it as a waste of our industrial activities (for which we could also provide treatment, direct air capture, or carbon capture and storage).
(ii) In circular economy, waste becomes a resource; zero-waste people would say there is no waste only resources.
(iii) The boundary between waste (paying for a treatment service) and by-product (getting paid for the material) can vary with markets/supply/demand changes.

So the raw physical fact is that things go into and out of an activity (i.e. metabolism), and we (or I) think that this is what must be stored in the database (pure accounting).
The value judgment can come as a second layer, when using the database for life cycle impact assessments. Basically, an impact assessment method is a set of value judgments that gives us characterization factors. The Bonsai implementation of GWP100 will take all flow instances of (fossil) CO2 going from any activity in the life cycle to the atmosphere and sum them up.
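A toy sketch of that second layer, assuming the database hands us flow instances crossing from activities to the atmosphere; the amounts and characterization factors below are illustrative only, not Bonsai's implementation.

```python
# Accounting layer: what crossed from an activity to the atmosphere (kg, made up).
flows_to_atmosphere = [
    ("steel production",       "CO2, fossil", 1800.0),
    ("electricity production", "CO2, fossil",  950.0),
    ("steel production",       "CH4, fossil",    2.0),
]

# Value-judgment layer: an impact assessment method as characterization factors
# (kg CO2-eq per kg; GWP100-style numbers used here only for illustration).
gwp100 = {"CO2, fossil": 1.0, "CH4, fossil": 29.8}

impact = sum(amount * gwp100.get(substance, 0.0)
             for _, substance, amount in flows_to_atmosphere)
print(f"{impact:.1f} kg CO2-eq")  # sum over all characterized flow instances
```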

References for the BEP:
Weidema, B. P.; Schmidt, J.; Fantke, P.; Pauliuk, S. On the Boundary between Economy and Environment in Life Cycle Assessment. Int. J. Life Cycle Assess. 2018, 23 (9), 1839–1846; DOI 10.1007/s11367-017-1398-4.

BRAVO/ Use of subclasses for input and output VS Use of predicates isInputOf and isOutputOf
From the previous meeting, I understood that the discussion raised by Massimo depends on which database we are talking about: the unlinked or the linked database.
I wrote this after our meeting, to clarify my understanding:

Note: the use of "linked" in this section does not refer to the "LinkedData" concept.
The database exists in multiple versions: an unlinked version (raw data) and linked versions (different linking assumptions lead to different versions, e.g. attributional, consequential).
The unlinked version represents the way data are collected in practice, the way data are available. Data are collected for each activity: what are the inputs of activity X, and what are the outputs of activity X. The unlinked database does not allow one to say where the output of activity X goes. In practice, unless the supply chain is known in detail (note: in LCA, knowing the supply chain fully is nearly impossible), the destinations of outputs and the provenance of inputs are not known.
This information gap is solved in the linked versions of the database by using linking assumptions. These linking assumptions can vary in algorithmic complexity and carry different interpretations and value judgments.

In a linked database, a flow-instance is an output of a single activity and an input to a single activity.
In an unlinked database, a flow-instance is either an input or an output of an activity.
[I feel I could be heavily wrong on that statement; and also feel that predicates are interesting]

CHARLY/ Validation, agency and social LCA in the ontology
Bo and Chris mentioned how external data can be used for validation of the BONSAI database, e.g. with GDP data from the World Bank. This echoes https://chris.mutel.org/next-steps.html#id1 but also extends to other options for validation (e.g. remote sensing data for land use change; anthropogenic emissions).
Conceptual difference [do you agree with it?]
>> GDP value by World Bank & GDP value by Bonsai are somehow based on the same raw data (though not easily accessible)
>> Remote sensing data for NOx emissions vs Bonsai data for NOx emissions are not of the same type of raw data; different measurement techniques/reporting frameworks
This said, for the purpose of the hackathon, GDP validation is enough to implement.

The GDP example raised the question of how to include agents in the ontology. This also sounds important for social LCA (is the social LCA database on our list of data sources?).
Agents are complex. The first terms I think of are: companies, governments, individuals, employees, households, multinationals, teams.
I have failed (tonight) to find an existing ontology of agents, but I am sure one exists.

With our focus on "activities", the first predicate I can think of is "isPerformedBy" an agent or set of agents. But then it gets blurry / not easy to generalise.
Example (see also the sketch after this section):
Activity = "Electricity production" #isPerformedBy Agent = "Coal power plant nb 1234"
Agent = "Coal power plant nb 1234" #isLocatedIn Country = "Germany", #isOwnedBy Entity = "InternationalPowerCompany" (at 60%), #isOwnedBy Entity= "City of Dusseldorf" (at 40%)
Agent = "Coal power plant nb 1234"  #hasWorkers literal = "70"
Company = "InternationalPowerCompany" #hasHighSkilledEmployees literal = "2000"
Company = "InternationalPowerCompany" #hasMediumSkilledEmployees literal = "5000"
...
Issues:
- ownership of a plant by several entities
- entities, companies, being multinational, much larger than the plant
- workers are not the same as employees; working somewhere vs. being employed by someone? One can be employed by a company but work in several places/plants.

Simplification: only have population/agent data for "super classes" that aggregate at the sector or country level, as in Exiobase? Issues: how to deal with multinational entities, companies, workers?
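For what it is worth, here is the plant example above written out as triples with rdflib; every namespace, predicate, and number is hypothetical, and the 60/40 ownership split is deliberately lost (it is one of the issues listed: keeping it would need something like a reified ownership node with a share).

```python
from rdflib import Graph, Literal, Namespace, RDF

# All names below are made up for illustration; not an agreed agent ontology.
BONT = Namespace("http://example.org/bonsai/ontology#")
EX = Namespace("http://example.org/bonsai/instance#")

g = Graph()
g.bind("bont", BONT)
g.bind("ex", EX)

g.add((EX.electricity_production, RDF.type, BONT.Activity))
g.add((EX.coal_plant_1234, RDF.type, BONT.Agent))

g.add((EX.electricity_production, BONT.isPerformedBy, EX.coal_plant_1234))
g.add((EX.coal_plant_1234, BONT.isLocatedIn, EX.Germany))
g.add((EX.coal_plant_1234, BONT.hasWorkers, Literal(70)))

# Shared ownership: two plain statements lose the 60/40 split.
g.add((EX.coal_plant_1234, BONT.isOwnedBy, EX.InternationalPowerCompany))
g.add((EX.coal_plant_1234, BONT.isOwnedBy, EX.CityOfDusseldorf))

g.add((EX.InternationalPowerCompany, BONT.hasHighSkilledEmployees, Literal(2000)))
g.add((EX.InternationalPowerCompany, BONT.hasMediumSkilledEmployees, Literal(5000)))

print(g.serialize(format="turtle"))
```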

DELTA/ How do we make the ontology/database usable for LCA-people if it does not have LCA-specific information in it?
Massimo asked this question.
If it is not directly included, I am guessing that the ontology/database becomes usable for LCA-people (or other types of people) via some additional layers.
For impact assessments, see reply to ALPHA.
For knowing the reference flow of an activity, I think this is solved in the linked databases (linked as in BRAVO, not LinkedData); but if you work with the raw unlinked data, you have to make that assumption anyway.

Besides, my (very) long-term vision for Bonsai is to advance further in merging industrial ecology methods: LCA, MFA, and even IOA, IAM, all forms of socioeconomic metabolism analysis, including bridges to dynamic system modelling and complex system modelling.

 


Last chance today to vote for your preference for social event #physical

Bo Weidema
 

Today is the last chance to give your preference order for the type of social event at https://github.com/BONSAMURAIS/hackathon-2019/blob/master/Participants.md

 

1. Inside Sagrada Familia (https://en.wikipedia.org/wiki/Sagrada_Fam%C3%ADlia)

2. Barcelona Top View

Visit to Park Güell (not the closed part) and other places with magnificent views of Barcelona. 

3. “Orienteering” with Quiz – around the old town of Barcelona

4. El Born, Barceloneta, and a walk along the beaches of Barcelona

 

For some entry tickets, age matters (below or above 30 years), as does whether you have a student card, so please indicate this as in this example:

 


Re: #softwaremethods Python library skeleton #softwaremethods

Miguel Fernández Astudillo
 

Dear all

 

It may be worth adding a couple of references to explain why all this version control, documentation, and testing is important. For people who are well versed in software development this may sound obvious, but for many others it is totally new. Maybe some of this should go in the “getting started” guidelines.

 

Specifically, I was thinking about some papers like:

 

 

 

 

I’ve just seen this intro to version control https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004668.

 

Best,

 

Miguel (Astudillo)

 

 

From: hackathon2019@bonsai.groups.io <hackathon2019@bonsai.groups.io> On Behalf Of Stefano Merciai
Sent: 15 March 2019 16:02
To: hackathon2019@bonsai.groups.io
Subject: Re: [hackathon2019] #softwaremethods Python library skeleton

 

Dear Brandon and Tomas,

I am sorry but I have withdrawn from the group to better focus on other issues.

Best,

SM

 

On 14/03/2019 23:28, Chris Mutel wrote:

Dear Brandon, Stefano, and Tomas:

As I did not see much movement from your working group, I have done the following:

Please follow up and complete these deliverables, as people will be reliant on them from the start of the hackathon.



-- 
Best,
S.
