
Re: #evaluation #evaluation

romain
 

Done!







Re: #evaluation #evaluation

Massimo Pizzol
 

Just a reminder about filling in the evaluation form. Only 5 have responded so far.

I guess everybody is taking a breath after the full immersion of the hackathon…but evaluation is important for improving the process, and I believe the organisers would really appreciate feedback.

I will leave the form open for responses until Friday April 5th at 12:00, and then upload the results to this discussion forum.

 

BR
Massimo

 

From: <hackathon2019@bonsai.groups.io> on behalf of "Massimo Pizzol via Groups.Io" <massimo@...>
Reply-To: "hackathon2019@bonsai.groups.io" <hackathon2019@bonsai.groups.io>
Date: Friday, 29 March 2019 at 14.25
To: "hackathon2019@bonsai.groups.io" <hackathon2019@bonsai.groups.io>
Subject: [hackathon2019] #evaluation

 

Dear all

 

Here is a simple evaluation form for the Hackathon.

 

It’s anonymous.


BR
Massimo


Re: Next steps for post-hackathon Bonsai

 

Some thoughts on managing projects/working groups:

New project ideas

We need a place where newcomers can browse outstanding or ongoing projects, to get some inspiration or see whether their skills could be put to use. I created a project board for this: https://github.com/orgs/BONSAMURAIS/projects/3, but I am not sure if this is the best layout. It could be better to have a repo called something like projects, where the issues would track each potential or ongoing project, and we could assign and filter by tags. In this repo, each directory would have a more detailed description of the working group/project, including where to find out more information (if applicable).

Communication

One of the most important things we can do is to better communicate our current working state, and where others can work on specific tasks. Our BEP on communication is here: https://github.com/BONSAMURAIS/enhancements/blob/bep4-communications/beps/0004-bonsai-communication-strategy.md. One of the things it proposes is to transfer the bonsai.uno webpage to a GH repo, and from some research and past experience a static-site generator would work well. The new project board proposes using Jekyll, which makes for trivial hosting on GH Pages, and is widely used and supported. It would allow us to use markdown consistently for everything, which is nice (I hear the howls from ReST fanbois - I am one too :). Would someone be interested in dissecting the current site? If you know a bit about HTML and template inheritance, it shouldn't take long (you can skip the donation page for now).

One vital thing would be a colorful vertical subway-style map showing how data progresses through the system, and which library is used at each step. Unless you are working with them directly, it can be easy to lose track.

BEP voting

The testing strategy for a few BEPs was to test them out at the hackathon. If you are a BEP author or editor, please think about revising your work, or submitting it for a vote.


Re: Call for data: what's missing?

 

1. The problems identified in https://github.com/BONSAMURAIS/bontofrom/issues/9 have not been solved so far, so electricity data can't be uploaded successfully.

2. The RDF repo is updated; everything in there should be fine to import.

3. I don't think exiobase has made any progress since the hackathon, I guess that @tomas would have told us if there was something available. Looks like there is a little activity here: https://github.com/BONSAMURAIS/bontofrom/commits/impl-tables.

I think that everyone slowed down a little after the hackathon; when it was clear that we would need more time, people shifted towards doing things right instead of just getting a first version done.




Re: Call for data: what's missing?

Carlos David Gaete <cdgaete@...>
 

Hi Matteo,

here you will find the bentso data:

I recommend having a look at the following files:

bentso_generation_2016_without_subtracting_PHS.csv
bentso_generation_2017_without_subtracting_PHS.csv
bentso_generation_2018_without_subtracting_PHS.csv
trade_2016.csv
trade_2017.csv
trade_2018.csv

These files contain annual electricity generation by technology and country, and annual electricity trade. 'Without subtracting PHS' in the file names means that no optimization has been applied to this data. The data in these files are otherwise unprocessed (only hourly electricity aggregated to annual totals), that is to say, collected from the ENTSO-E API and organized in the format required for RDF.
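The aggregation described above (hourly ENTSO-E values summed to annual totals per country and technology) can be sketched roughly as follows; the column names and toy data are invented for illustration, not the actual bentso CSV schema:

```python
import csv
import io
from collections import defaultdict

# Toy stand-in for an ENTSO-E hourly export; the column names are
# invented for illustration, not the actual bentso CSV schema.
hourly_csv = """timestamp,country,technology,generation_mwh
2018-01-01T00:00,DE,wind_onshore,120.0
2018-01-01T01:00,DE,wind_onshore,130.0
2018-01-01T00:00,DK,solar,10.0
"""

def aggregate_annual(fileobj):
    """Sum hourly generation to annual totals per (year, country, technology)."""
    totals = defaultdict(float)
    for row in csv.DictReader(fileobj):
        year = row["timestamp"][:4]
        totals[(year, row["country"], row["technology"])] += float(row["generation_mwh"])
    return dict(totals)

print(aggregate_annual(io.StringIO(hourly_csv)))
```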

Please double-check with Chris, since he has also been working on this.

Best,
Carlos




Call for data: what's missing?

Matteo Lissandrini (AAU)
 

Hi all,
I would like to try to import all the data into the triple-store, each dataset in its dedicated graph.
I will for sure start with the contents of the /rdf repository (have you all updated the metadata at the beginning of the files? If you are not sure, let me know and I'll help you).

I was trying to understand which other pieces are missing. Is it bentso? Or arborist? What about the exiobase data?
This could also be a good task for testing the "reproducibility" of the work.
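The per-graph import described above could look something like the following SPARQL 1.1 Update request; the file path and graph URI below are placeholders, not the project's actual naming scheme:

```sparql
# Placeholder file path and graph URI -- substitute the real /rdf
# layout and the graph-naming scheme agreed by the ontology group.
LOAD <file:///data/rdf/bentso/electricity.ttl>
  INTO GRAPH <http://example.org/graph/bentso>
```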

Please let me know what are your thoughts.

Thanks,
Matteo


Re: #correspondencetables - Getting to 1.0 #correspondencetables

Miguel Fernández Astudillo
 

Dear all, my replies inline

 

Best, Miguel

 

From: hackathon2019@bonsai.groups.io <hackathon2019@bonsai.groups.io> On Behalf Of Chris Mutel
Sent: 01 April 2019 12:54
To: hackathon2019@bonsai.groups.io
Subject: [hackathon2019] #correspondencetables - Getting to 1.0

 

Dear all-

I am happy that there are a number of people participating here, and I think we have everything ready for assembly into a 1.0 version of this package. However, from reading these emails and looking at the repo itself, it seems like a little organization and goal-setting could help move this project forward. Here are some suggestions:

1. The goal and capabilities (user stories) for 1.0 should be clearly defined. Some possibilities:

 

I agree that it needs cleaning and clearer goal-setting. I actually opened an issue (#21) about this, and I suggest moving some of the content to other repos.

 


- Python package that provides for trivial application of correspondence tables. As a BONSAI user, I want to be able to call `correspondence(data, field_identifier, table_name, aggregation_func, disaggregation_func)` and get my `data` updated automatically.

 

Is the idea to use these functions later in the arborist repo to generate the “rdf” files?


- All output correspondence data should be provided in a 3-column format, with the third column being the SKOS verb.

Maybe a fourth column is needed for dis/aggregation weights.
- All output correspondence data should have metadata in DataPackage form

 

That makes a lot of sense to me, although we may need some help to choose the predicates.


1.5 If a system uses multiple identifiers (e.g. exiobase), all identifiers should be in their own columns, as at some point each one will be needed.

2. This should be a python package based on the python skeleton. Being a python package would provide structure so that people would know what goes where. However, not every directory would need to be included in the python library itself. Instead, you could have this structure:

correspondence (python library code here)
    python code to do matching

 

I’m not sure what is expected here. We can discuss the details.


    output
        csv and json files
        autogenerated index.html which lists all files and their descriptions
raw (input data in original downloaded form)
 


Of course, other models are possible...

3. The RDF vocabulary terms needed should be identified and documented in the README

 

Do we limit it to the predicates?

4. RDF terms should be computed automatically from the correspondence tables, perhaps with a bit of manual intervention. The default should probably not be an exact match, but this would be configurable. In general it should be possible to map N-1 relations with one term, 1-1 with another, etc. without having to have a person go through long lists.

I would be happy to help with specific technical implementations of any of these tasks.

Who is now coordinating this working group? Could you please update issue #3 to show the current status and short-term plans?


Re: #bentso Next steps for pumped hydro model #bentso

Carlos David Gaete <cdgaete@...>
 

Hi Chris,

It sounds like a good idea; I will do it during the week, as well as organizing and tidying up the bentso model. The iterator gets stuck when the ENTSO-E API does not have data available for some countries.



#bentso Next steps for pumped hydro model #bentso

 

(This message is mostly for Carlos :)

I think it would make sense to separate the pumped hydro optimization model from the `bentso` code, as they are really two libraries aimed at two different things. Probably you should copy the python skeleton framework, and then commit something to your own personal repository (as this is your own work).

I think it would also make sense to invest a bit in validating the model. There are obviously a lot of people working in this area (e.g. https://www.sciencedirect.com/journal/energy-procedia/vol/87/suppl/C), though I don't know if there are any good open-source models, and I haven't found any validation data. You should reach out to the open energy modelling folk (http://openmod-initiative.org/), this is their bread and butter.


#correspondencetables - Getting to 1.0 #correspondencetables

 

Dear all-

I am happy that there are a number of people participating here, and I think we have everything ready for assembly into a 1.0 version of this package. However, from reading these emails and looking at the repo itself, it seems like a little organization and goal-setting could help move this project forward. Here are some suggestions:

1. The goal and capabilities (user stories) for 1.0 should be clearly defined. Some possibilities:
- Python package that provides for trivial application of correspondence tables. As a BONSAI user, I want to be able to call `correspondence(data, field_identifier, table_name, aggregation_func, disaggregation_func)` and get my `data` updated automatically.
- All output correspondence data should be provided in a 3-column format, with the third column being the SKOS verb. Maybe a fourth column is needed for dis/aggregation weights.
- All output correspondence data should have metadata in DataPackage form

1.5 If a system uses multiple identifiers (e.g. exiobase), all identifiers should be in their own columns, as at some point each one will be needed.

2. This should be a python package based on the python skeleton. Being a python package would provide structure so that people would know what goes where. However, not every directory would need to be included in the python library itself. Instead, you could have this structure:

correspondence (python library code here)
    python code to do matching
    output
        csv and json files
        autogenerated index.html which lists all files and their descriptions
raw (input data in original downloaded form)

Of course, other models are possible...

3. The RDF vocabulary terms needed should be identified and documented in the README

4. RDF terms should be computed automatically from the correspondence tables, perhaps with a bit of manual intervention. The default should probably not be an exact match, but this would be configurable. In general it should be possible to map N-1 relations with one term, 1-1 with another, etc. without having to have a person go through long lists.
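A minimal sketch of how points 1 and 4 could fit together, assuming the 3-column (source, target, SKOS verb) layout; the classification codes, helper names, and the simplified `correspondence` signature are illustrative assumptions, not an agreed API:

```python
from collections import defaultdict

SKOS = "http://www.w3.org/2004/02/skos/core#"

# Hypothetical 3-column correspondence rows: (source code, target code, SKOS verb)
table = [
    ("01.1", "0111", SKOS + "narrowMatch"),
    ("01.1", "0112", SKOS + "narrowMatch"),
    ("02",   "0200", SKOS + "exactMatch"),
]

def infer_skos_verb(rows):
    """Guess a SKOS verb from mapping cardinality: a source with exactly
    one target gets exactMatch; with several targets, each target is
    narrower than the source, so narrowMatch."""
    targets = defaultdict(list)
    for src, dst, _ in rows:
        targets[src].append(dst)
    return {src: SKOS + ("exactMatch" if len(dsts) == 1 else "narrowMatch")
            for src, dsts in targets.items()}

def correspondence(data, field, rows):
    """Relabel data[field] via the table. For 1-N mappings this sketch
    just keeps the first target; a real version would disaggregate
    using the weights discussed above."""
    mapping = {}
    for src, dst, _ in rows:
        mapping.setdefault(src, dst)
    return [{**rec, field: mapping.get(rec[field], rec[field])} for rec in data]
```

A real implementation would also honour the proposed aggregation/disaggregation functions and weight column, and make the default verb configurable.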

I would be happy to help with specific technical implementations of any of these tasks.

Who is now coordinating this working group? Could you please update issue #3 to show the current status and short-term plans?


Re: News and updates #correspondencetables

Miguel Fernández Astudillo
 

Nice

 

I’ve created an issue on the repo tagged as an enhancement.

https://github.com/BONSAMURAIS/Correspondence-tables/issues/22

 

I also asked them for the correspondence tables from SIEC to CPC and from SIEC to HS, which, according to their own reports, should be there.

 

best, Miguel

 

PS:

SIEC: Standard International Energy Product Classification

HS: Harmonized Commodity Description and Coding System

CPC: Central Product Classification

 

 


 


News and updates #correspondencetables

Michele De Rosa
 

Dear all,

I have received an update from the UNSD Statistical Classification office. This was the source of most of the tables that we currently have on GitHub. These were removed from the UNSD webpage due to a major internal restructuring within the UNSD. The individual components of the UNSD classifications programme are now decentralized and assigned to the substantive Branches.

UNSD is now upgrading the classifications website. Although the project is large and not yet completed, they are currently reviewing the economic classifications page to ensure that all the economic classification products and correspondence tables are correct and available on the website in machine-readable versions.

This could be a valid source for updated correspondence tables in the future, if required. As you can see, they now have an overview of the available correspondence tables and their formats in a matrix-like layout, just like the one we previously developed internally (and asked them to develop).

Michele

 


Re: Post-hackathon ontology group #ontology #followup

Matteo Lissandrini (AAU)
 

I totally feel the same.
I always publish pre-prints on my personal page (not arXiv, I have issues with the use of arXiv nowadays),
and usually the confs/journals I submit to have open proceedings.

Cheers,
Matteo




Re: Post-hackathon ontology group #ontology #followup

Massimo Pizzol
 

Thanks Chris

I don’t have any problem with publishing a preprint on arXiv.org or similar, open for comments, prior to submission to a journal. But until this preprint is ready I am not comfortable having a publicly accessible working paper, so my preference is still for a Google Doc with contributors only.

 

Massimo

 



Re: Post-hackathon ontology group #ontology #followup

 

On Sat, 30 Mar 2019 at 13:29, Massimo Pizzol <massimo@...> wrote:
I would prefer using Google Docs for a draft. If this is to become a scientific paper it can't be a public document; it should be accessible to contributors only, via invitation.

Sorry to post something somewhat off-topic, but this statement is not correct: peer-reviewed publications can absolutely be developed in the open, and even some vintage pay-for-access journals such as ES&T will allow you to publish pre-prints on e.g. arxiv.org (https://pubs.acs.org/page/esthag/submission/prior.html). As this paper would be the supporting documentation for the choices and use of the ontology, I would strongly recommend that it a) be developed in the open, so that people can read up on its use and any new changes that you might make, and b) be submitted to a truly open access journal such as ERL or PLOS One.


Re: Post-hackathon ontology group #ontology #followup

Matteo Lissandrini (AAU)
 

Thanks, Elias,
I've added more info to the GDoc.

Enjoy the weather, hoping to meet soon :)

Cheers,
Matteo





Re: Post-hackathon ontology group #ontology #followup

Elias Sebastian Azzi
 

Hello,

Thanks for your input.

* You should have received an invitation to the Google Drive file. Overleaf is interesting indeed; if a majority wants it, we can switch.

 

* Web Conference and paper format: to be settled collectively; next Zoom meeting is on Friday, right?

 

* Regarding Bo's point on product footprints: we should then also add background on the EU PEF that is soon to be released (in my understanding of the PEF policy, i.e. compulsory PEF to enter the EU market, demand for product footprinting is going to skyrocket in the coming years).

Have a nice weekend; amazing spring day in Stockholm; trees ready to burgeon; went to a seed exchange event: the garden will soon be full of plants. If you pass by Stockholm, let me know ;-)


Re: Post-hackathon ontology group #ontology #followup

Bo Weidema
 

Great conference venue :-)

Bo

On 2019-03-30 at 13.26, Matteo Lissandrini (AAU) wrote:

This is one that I would like to try


Re: Post-hackathon ontology group #ontology #followup

Massimo Pizzol
 

Thanks Elias

Good outline.

I would prefer using Google Docs for a draft. If this is to become a scientific paper it can't be a public document; it should be accessible to contributors only, via invitation.

Massimo
--

Massimo Pizzol

DCEA | Department of Planning
Aalborg University (DK)

Phone: 45 9940 8369
Blog: moutreach.science

________________________________________
From: hackathon2019@bonsai.groups.io [hackathon2019@bonsai.groups.io] on behalf of Bo Weidema via Groups.Io [bo.weidema=bonsai.uno@groups.io]
Sent: Saturday, March 30, 2019 10:37 AM
To: hackathon2019@bonsai.groups.io
Subject: Re: [hackathon2019] Post-hackathon ontology group #ontology #followup

Thanks for taking the initiative. I find Google Docs to be appropriate for this kind of work. If you prefer an open tool, a less advanced option is a Riseup pad.

I would like to distinguish between the BONSAI core, which is what is absolutely needed to perform BONSAI footprints, and then have a separate add-on to enable its more general use in Industrial Ecology, mainly an expansion with the concepts of "stock", "assets" and "behavioural rules" linked to agents. All concepts need to be supported by actual data examples and validations.

Best regards

Bo

On 2019-03-30 at 09.55, Elias Sebastian Azzi wrote:

Hej allihop,

I guess all of us have an outline in mind for the ontology-related paper. Here is what I have in mind; I guess we can build on it, iterate and agree on a suitable structure and contents.
Not sure if groups.io is the best way to keep track of edits in the long run; not sure that google.drive suits everyone either; GitHub would be public; we use BOX.com internally.

INTRODUCTION

§ Paragraph IE

IE's object of study is SEM (socioeconomic metabolism).

The accounting of social, environmental and economic flows in time and space.

Many fields of sustainability and decision-making towards sustainability rely on such accounting.

Several approaches exist: MFA / LCA / IOA; all share the same structure, despite different vocabularies/inconsistencies (ref).

Call to lift up IE methods: Reference to Pauliuk

Call to generalise: Bo, 2018





§ Paragraph CS

* Ontology definition
* Linked Data, Web of Data, background, e.g. in life sciences and what it has enabled
* Open-source



§ IE and CS

Acknowledge previous efforts: LCA ontologies existing, IE GitHub repo open source,…

Still, as of today:

* Conventional databases (ecoinvent, exiobase) have not yet made the move towards these flexible data structures.
* Remains a challenge to have interoperability between databases
* Validation of databases
* Transparency of assumptions (constructs / system model)
* Updates of databases



§ BONSAI organisation, stretch goals, hackathon

* BONSAI organisation / Working rules / BEP process
* Stretch goals; highlight the sub-goal of the ontology group
* Hackathon, background info



§ Paper's aim

* Describe the ontology developed, providing examples and possible extensions
* Report on BEP process, main choices, alternatives left out + reproducibility



§ Outline plan of article

[not sure an IMRaD structure is best suited, maybe better to have numbered section]

Section 2 = BEP process, rules, principles

Section 3 = Ontology description (link to online documentation)

Section 4 = Examples (unlinked, linked)

Section 5 = Advanced example, ideally with actual URIs, online;

Section 6 = Discussion of key choices, extensions per field, future enhancements, validation of rdfs

Section 7 = Conclusions, link to other working groups (e.g. correspondence tables, rdf triplestore)



SECTION 2 BEP process, rules, principles

Rules for agreeing; definition; BEPs

Principles:

* Minimalist principle: core; complementary ontologies for different fields; extensions upcoming
* On the distinction between A and B; for IA: atmosphere as a process



SECTION 3 Ontology description

Table: 3 columns: Key vocabulary in the ontology; labels as in ontology; other usual names given in different fields (LCA, MFA, IOA, LCC, etc)



Figure: online viz of rdfs



§ Text description of:

* Classes
* Sub-classes
* Properties, balanceable properties, …
* Sub activities: market, transport, …



§ External ontologies

* Measure, location,…



§ To add:

* Uncertainty, …





EXAMPLE / APPLICATION [Section 4 , 5]

§ Current hosting of the ontology; examples are available at the URIs mentioned and on GitHubRepo



§ Several examples with figures, from simple to complex

Example Raw data; System model

Example with transport, market, impact assessment



DISCUSSION [based on BEPs, Section 6]

* Validation [oops thing]
* Why we have made certain choices
* Alternatives left out
* Reproducibility
* Extensions: field specific extensions, agent theory (usually lacking in our field), linkages with other resources on WebOfData
* Give one example of "re-use" by others of the ontology: accounting framework for Stockholm municipality (REFLOW project), how it fits in the ontology (or not) + ref to



CONCLUSIONS / VISION /EXTENSIONS [Section 7]

* how we see them;
* e.g. GDP, economic data
* agent theory (unless we want to add it to the ontology right away)





Some references



(1) Pauliuk, S.; Majeau-Bettez, G.; Mutel, C. L.; Steubing, B.; Stadler, K. Lifting Industrial Ecology Modeling to a New Level of Quality and Transparency: A Call for More Transparent Publications and a Collaborative Open Source Software Framework. J. Ind. Ecol. 2015, 19 (6), 937–949; DOI 10.1111/jiec.12316.

(2) Pauliuk, S.; Majeau-Bettez, G.; Müller, D. B. A General System Structure and Accounting Framework for Socioeconomic Metabolism. J. Ind. Ecol. 2015, 19 (5), 728–741; DOI 10.1111/jiec.12306.

(3) Pauliuk, S.; Majeau-Bettez, G.; Müller, D. B.; Hertwich, E. G. Toward a Practical Ontology for Socioeconomic Metabolism. J. Ind. Ecol. 2016, 20 (6), 1260–1272; DOI 10.1111/jiec.12386.

(4) Weidema, B. P.; Schmidt, J.; Fantke, P.; Pauliuk, S. On the Boundary between Economy and Environment in Life Cycle Assessment. Int. J. Life Cycle Assess. 2018, 23 (9), 1839–1846; DOI 10.1007/s11367-017-1398-4.



Re: Post-hackathon ontology group #ontology #followup

Matteo Lissandrini (AAU)
 

Hi all,
I think all this discussion depends highly on the venue and on the "style" of paper.

I would like to send a paper to a semantic web venue; we could also consider a Semantic Web journal.

This is one that I would like to try

For this kind of paper I would use LaTeX with Overleaf, or through git.
In both cases I would move the practical discussion to private correspondence + Slack among all those interested.

Kind regards,
Matteo



From: hackathon2019@bonsai.groups.io [hackathon2019@bonsai.groups.io] on behalf of Bo Weidema via Groups.Io [bo.weidema@...]
Sent: Saturday, March 30, 2019 10:37 AM
To: hackathon2019@bonsai.groups.io
Subject: Re: [hackathon2019] Post-hackathon ontology group #ontology #followup

Thanks for taking the initiative. I find Google Docs appropriate for this kind of work. If you prefer an open tool, a less advanced option is a Riseup Pad.

I would like to distinguish between the BONSAI core, which is what is absolutely needed to perform BONSAI footprints, and then have a separate add-on to enable its more general use in Industrial Ecology, mainly an expansion with the concepts of "stock", "assets" and "behavioural rules" linked to agents. All concepts need to be supported by actual data examples and validations.
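The core/extension split described above could be sketched in RDF along the following lines. This is a hypothetical illustration only: the namespaces, class names and property names are assumptions made for the example, not the published BONSAI ontology.

```turtle
# Hypothetical sketch, not the actual BONSAI ontology.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix core: <http://example.org/bonsai/core#> .
@prefix ie:   <http://example.org/bonsai/ie-ext#> .

# Core: the minimum needed to compute BONSAI footprints.
core:Activity   a owl:Class .
core:FlowObject a owl:Class .
core:Flow       a owl:Class .

# Separate add-on for general Industrial Ecology use:
# stocks and agents, kept out of the footprint core.
ie:Agent a owl:Class .
ie:Stock a owl:Class ;
    rdfs:comment "Accumulation of a flow object held by an agent over time."@en .
ie:heldBy a owl:ObjectProperty ;
    rdfs:domain ie:Stock ;
    rdfs:range  ie:Agent .
```

Keeping the extension in its own namespace means footprint tooling can ignore it entirely, while IE applications import both files.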

Best regards

Bo

Den 2019-03-30 kl. 09.55 skrev Elias Sebastian Azzi:

Hi everyone,

I guess all of us have an outline in mind for the ontology-related paper. Here is what I have in mind; I guess we can build on it, iterate and agree on a suitable structure and contents.
Not sure if groups.io is the best way to keep track of edits in the long run; not sure that Google Drive suits everyone either; GitHub would be public; we use Box.com internally.

INTRODUCTION

§ Paragraph IE

IE's object of study is socioeconomic metabolism (SEM).

The accounting of social, environmental and economic flows in time and space.

Many fields of sustainability and decision-making towards sustainability rely on such accounting.

Several approaches exist (MFA / LCA / IOA), but all share the same underlying structure, despite different vocabularies and inconsistencies (ref).

Call to lift up IE methods: Reference to Pauliuk

Call to generalise: Bo, 2018

 

 

§ Paragraph CS

  • Ontology definition
  • Linked Data, Web of Data, background, e.g. in life sciences and what it has enabled
  • Open-source

 

§ IE and CS

Acknowledge previous efforts: existing LCA ontologies, the open-source IE GitHub repo, …

Still, as of today:

  • Conventional databases (ecoinvent, EXIOBASE) have not yet made the move towards these flexible data structures.
  • Interoperability between databases remains a challenge
  • Validation of databases
  • Transparency of assumptions (constructs / system model)
  • Updates of databases

 

§ BONSAI organisation, stretch goals, hackathon

  • BONSAI organisation / working rules / BEP process
  • Stretch goals; highlight the sub-goal of the ontology group
  • Hackathon, background info

 

§ Paper's aim

  • Describe the ontology developed, providing examples and possible extensions
  • Report on BEP process, main choices, alternatives left out + reproducibility

 

§ Outline plan of article

[not sure an IMRaD structure is best suited; maybe better to have numbered sections]

Section 2 = BEP process, rules, principles

Section 3 = Ontology description (link to online documentation)

Section 4 = Examples (unlinked, linked)

Section 5 = Advanced example, ideally with actual URIs, online;

Section 6 = Discussion of key choices, extensions per field, future enhancements, validation of the RDFS

Section 7 = Conclusions, link to other working groups (e.g. correspondence tables, RDF triplestore)

 

SECTION 2  BEP process, rules, principles

Rules for agreeing; definition; BEPs

Principles:

  • Minimalist principle: core; complementary ontologies for different fields; extensions upcoming
  • On the distinction between A and B; for IA: atmosphere as a process

 

SECTION 3 Ontology description

Table with 3 columns: key vocabulary in the ontology; labels as in the ontology; other usual names given in different fields (LCA, MFA, IOA, LCC, etc.)
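Such a cross-field vocabulary table could also be encoded alongside the ontology itself. A sketch of the idea, where the term URI and the field-specific synonyms are assumptions chosen purely for illustration:

```turtle
# Hypothetical sketch: one ontology term mapped, via SKOS labels,
# to the names used for it in different fields.
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix core: <http://example.org/bonsai/core#> .

core:Flow
    skos:prefLabel "flow"@en ;          # label as in the ontology
    skos:altLabel  "exchange"@en ,      # usual name in LCA
                   "transaction"@en .   # usual name in IOA
```

Encoding the table this way would let the paper's Table and the published ontology stay in sync from a single source.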

 

Figure: online visualisation of the RDFS

 

§ Text description of:

  • Classes
  • Sub-classes
  • Properties, balanceable properties, …
  • Sub activities: market, transport, …

 

§ External ontologies

  • Measure, location,…

 

§ To add:

  • Uncertainty, …

 

 

EXAMPLE / APPLICATION [Section 4 , 5]

§ Current hosting of the ontology; examples are available at the URIs mentioned and in the GitHub repo

 

§ Several examples with figures, from simple to complex

Example Raw data; System model

Example with transport, market, impact assessment
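A minimal instance-data sketch for such an example might look as follows. All URIs, property names and quantities here are invented for illustration and would need to be replaced by the terms of the actual ontology:

```turtle
# Hypothetical instance data: one steel flow from a producing
# activity into a market activity. All names/values are invented.
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix core: <http://example.org/bonsai/core#> .
@prefix ex:   <http://example.org/data/> .

ex:steel-production a core:Activity .
ex:steel-market     a core:Activity .   # market activity mixing suppliers

ex:flow-1 a core:Flow ;
    core:objectType ex:steel ;              # what flows
    core:outputOf   ex:steel-production ;   # origin activity
    core:inputOf    ex:steel-market ;       # destination activity
    core:value      "1000.0"^^xsd:decimal . # quantity, e.g. in tonnes
```

The advanced example in Section 5 would then extend this pattern with a transport activity between producer and market, and with flows into an impact-assessment "process" such as the atmosphere.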

 

DISCUSSION [based on BEPs, Section 6]

  • Validation [oops thing]
  • Why we have made certain choices
  • Alternatives left out
  • Reproducibility
  • Extensions: field specific extensions, agent theory (usually lacking in our field), linkages with other resources on WebOfData
  • Give one example of "re-use" by others of the ontology: accounting framework for Stockholm municipality (REFLOW project), how it fits in the ontology (or not) + ref to

 

CONCLUSIONS / VISION /EXTENSIONS [Section 7]

  • how we see them;
  • e.g. GDP, economic data
  • agent theory (unless we want to add it to the ontology right away)

 

 

Some references

 

(1)     Pauliuk, S.; Majeau-Bettez, G.; Mutel, C. L.; Steubing, B.; Stadler, K. Lifting Industrial Ecology Modeling to a New Level of Quality and Transparency: A Call for More Transparent Publications and a Collaborative Open Source Software Framework. J. Ind. Ecol. 2015, 19 (6), 937–949; DOI 10.1111/jiec.12316.

(2)     Pauliuk, S.; Majeau-Bettez, G.; Müller, D. B. A General System Structure and Accounting Framework for Socioeconomic Metabolism. J. Ind. Ecol. 2015, 19 (5), 728–741; DOI 10.1111/jiec.12306.

(3)     Pauliuk, S.; Majeau-Bettez, G.; Müller, D. B.; Hertwich, E. G. Toward a Practical Ontology for Socioeconomic Metabolism. J. Ind. Ecol. 2016, 20 (6), 1260–1272; DOI 10.1111/jiec.12386.

(4)     Weidema, B. P.; Schmidt, J.; Fantke, P.; Pauliuk, S. On the Boundary between Economy and Environment in Life Cycle Assessment. Int. J. Life Cycle Assess. 2018, 23 (9), 1839–1846; DOI 10.1007/s11367-017-1398-4.

--