

Productification for Collaborative Semantic ModelingOssi A. Nykänen

Tampere University of Technology Korkeakoulunkatu 10

FI-33720 Tampere, Finland +358 40 849 0730

[email protected]

ABSTRACT

The lack of proper authoring tools poses a practical but very significant challenge for adopting new technologies and application concepts. In this article, we identify a development strategy called "productification", which suggests transforming some application-specific authoring tasks into tasks of using common-purpose authoring tools and systems. In brief, the objective is typically to harness the power of existing office and productivity applications for various authoring and related tasks, while consuming the information in novel applications and learning from experience. To demonstrate and discuss this approach, we present a case study of using spreadsheets in the collaborative semantic modeling of a core curriculum. Besides pointing out an efficient approach for collaborative semantic modeling, we use the productified authoring system to analyze the related core curriculum development process, making observations that help in designing machine-understandable, high-quality core curricula.

Categories and Subject Descriptors I.7.2 [Document and Text Processing]: Document Preparation – desktop publishing, languages and systems

H.5.3 [Information Interfaces and Presentation]: Group and Organization Interfaces – computer-supported cooperative work

I.2.4 [Artificial Intelligence]: Knowledge Representation Formalisms and Methods – semantic networks

K.3.2 [Computers and Education]: Computer and Information Science Education – Curriculum

General Terms Documentation, Design, Human Factors, Languages

Keywords Structured Documents, Productification, Semantic Modeling, Curriculum Design

1. INTRODUCTION

The lack of proper authoring tools poses a practical but very significant challenge for adopting new technologies and application concepts. This easily leads to chicken-and-egg problems, since the availability of authoring tools often establishes the adoption criteria for new technologies and applications. As a consequence, some potentially valuable new technologies and applications may never get accepted, to the loss of the entire domain.

In this article, we identify a development strategy called "productification", which suggests transforming some application-specific authoring tasks into tasks of using common-purpose, mainstream authoring tools and systems. In brief, the objective is typically to harness the power of existing office and productivity applications for various authoring and related tasks, while consuming the information in novel applications.

In order to demonstrate and discuss this approach, we present a case study of using spreadsheets in the collaborative semantic modeling of a core curriculum. This application domain is of particular interest to us. It is closely related to an ongoing development project, eOps 3.0, funded by the Finnish National Board of Education, which investigates the possibilities of structured, machine-understandable curricula [7]. The application domain of learning ontology design [4] is also quite challenging in itself, since it points out the need to consider curriculum development also from the viewpoint of model design, and not simply, say, narrative word processing. From a pedagogical point of view, a major motivation for seeking a machine-understandable curriculum is the intuitive objective of associating learning topics with (freely) available learning content (for instance, [15]).

Besides pointing out an efficient approach for collaborative semantic modeling, we use the productified authoring system to analyze the related core curriculum development process, making observations that help in designing high-quality core curricula. The suggested approach allows rapid, asynchronous development of applications, and the inclusion of non-technical content experts in the development process at an early stage.

The basic idea of using commonly available tools for specific tasks is obviously not new. Several common office tool suites include macro and full-fledged programming environments, which enable using them as frameworks for developing all kinds of applications [22]. Thus, the distinction between "office tools" and "novel applications" is not black and white. Most people, however, might use the common office and productivity tools in their basic form, i.e. without visible add-ons. Further, the freeware versions might not include all the programming features [8].

The lack of suitable authoring tools has motivated using, e.g., office tools in domains where specific tools are not currently available. Well-known examples include developing beta versions of authoring tools and information management infrastructures using spreadsheets, with known applications ranging from evaluating the quality of public web services to planning space missions [21][11]. More recently, the rapid introduction of cloud-based tools has added new properties to office suites, notably browser-based editing, easy versioning, and sharing [9]. In our case, related ideas were first explicitly adopted when authoring interactive exercises for university-level mathematics [16]. Since then, the idea has been applied in different forms without a common term.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

AcademicMindTrek '14, November 04 - 07 2014, Tampere, Finland Copyright is held by the owner/author(s). Publication rights licensed to ACM.

ACM 978-1-4503-3006-0/14/11...$15.00 http://dx.doi.org/10.1145/2676467.2676470

In certain application domains where the lack of authoring tools is most evident and data-driven applications are common, specific mapping technologies have been proposed and standardized. In particular, in the case of Semantic Web [24] applications, standard technologies have been introduced, e.g., for mapping relational database and semantic network models (R2RML), and for obtaining RDF data from well-defined document formats (GRDDL) [6][5]. Further, Semantic Web standardization is currently moving towards using the CSV format as a serialization syntax [23], which no doubt raises interest in spreadsheet applications as well.

With respect to the state of the art, the main contribution of this article is twofold. We synthesize the idea of productification and point out its main use cases, emphasizing the role of supportive authoring in novel applications. We then report and analyze a case study, using the results as a stepping stone towards understanding both the implementation and scope of productification, and machine-understandable core curriculum design.

The rest of this article is organized as follows: In Section 2 we briefly introduce the productification concept. This is then applied in terms of a semantic modeling case study in Section 3, where we outline the related productified authoring system architecture and introduce and analyze the main steps, with respect to core curriculum design. In Section 4 we conclude the article.

2. PRODUCTIFICATION CONCEPT

We define productification as transforming some application-specific authoring tasks into tasks of using common-purpose, typically mainstream, authoring tools and systems. This involves specifying how authors should use the generic tools in a specific way, and implementing the necessary software for accessing the resulting data. The main use cases of productification include:

1. Introducing low-cost authoring tools for novel applications where specific tools do not (yet) exist.

2. Simplifying certain parts of the authoring process by using tools already familiar to authors.

3. Gaining experience and feedback on the authoring process for analysis and understanding, with the option of making low-cost experiments.

A productified authoring system is set up by a chief designer or an information architect, in collaboration with the content authors. From the management point of view, the objectives, scope, and success criteria for productification must be explicitly defined, in order to evaluate and control the productification process.

Figure 1. Key elements of a productified authoring system.

Figure 1 outlines the key elements of a productified authoring system. In brief, the system includes four main components. The novel application denotes the target system for whose purposes content is created. Authoring takes place in two forms, primary and supportive authoring, typically aimed at technical and non-technical authors, respectively. Supportive authoring is carried out using mainstream authoring and productivity tools, the output of which is adapted to the novel application. There are also feedback loops that help verify that supportive authoring is on the right track.

While the primary authoring tools typically depend on the application, many common editor applications suffice as supportive authoring tools. Typical examples include office suites [22], such as spreadsheet editors and word processors. This also means that while the adapters of course depend on the novel applications, some internal adapter components may be reused (notably parsers). Since high-quality office tools are available both for local installation and as shared Web applications [9], productification may also provide facilities for shared authoring.

3. A CASE STUDY: SEMANTIC MODELING OF A CORE CURRICULUM

In Finland, the Finnish National Board of Education has begun to prepare the new national core curriculum for basic and pre-primary education. The new curriculum will be based on the Decree on national objectives and distribution of teaching hours in basic education (422/2012), issued by the Government in June 2012. The preparation is carried out in working groups that focus on structure and objectives, conceptions of learning, support for learning, and the different subjects taught in basic education. [7]

The work involves specifying sets of learning objectives, evaluation criteria, etc. for the various subjects. In practice, before specific core curriculum design tools are available, the work is carried out by distributing word processor documents in narrative form, where the relevant information is encoded as tables. At the time of writing, the development work is still underway.

3.1 Problem Statement

To investigate syntactic and semantic modeling related to the core curriculum development, we have analyzed a few of the resulting documents, taking the physics draft [17] as the basis of our work.

It should be emphasized that this means not explicitly taking the existing semantic modeling approaches for teaching and physics curricula into account (see, e.g., [4][3][15]). It should be obvious, however, that developing a Finnish core curriculum should benefit from integration with the international activities of the field. Our vision is that eventually, core curriculum development work should output a formal model of the domain with an appropriate level of detail, which would support the following kinds of use cases:

• UC1: Curriculum analysis (also during curriculum development)

• UC2: Searching for concepts (classes) and individual learning topics related to a particular natural phenomenon, learning objective, or evaluation criterion, and vice versa (during teaching and studying)

• UC3: Associating topics with the available learning material in a machine-understandable way (perhaps due to 3rd-party authoring)

From the perspective of semantic modeling and computing, most of the related, detailed use cases could be captured in terms of (more detailed) search and inference test cases.
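As an illustration, a search-oriented test case (in the spirit of UC2) can be reduced to matching triple patterns against the model. The sketch below does this with a naive pure-Python matcher over (subject, predicate, object) tuples, standing in for the SPARQL queries one would run against the real OWL model; the property name and the T1/S1-style identifiers follow the examples used later in this article, but the concrete triples are hypothetical test data.

```python
def match(triples, pattern):
    """Naive triple-pattern query: None in the pattern acts as a
    wildcard, much like a variable in a SPARQL basic graph pattern."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# A UC2-style test case: which content areas relate to objective T1?
# (Hypothetical model data, not the actual curriculum content.)
model = [
    ("opsind:T1", "ops:relatedContentArea", "opsind:S1"),
    ("opsind:T1", "ops:relatedContentArea", "opsind:S4"),
    ("opsind:T2", "ops:relatedContentArea", "opsind:S2"),
]
related = match(model, ("opsind:T1", "ops:relatedContentArea", None))
```

A set of such fixed pattern/expected-result pairs, run automatically, is exactly the kind of query test suite the use cases call for.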

According to the current curriculum work description, however, the use cases cannot be fully met since planning and development are based on natural language with little formal modeling. Thus, it is expected that human teachers process the curriculum.

Figure 2. An excerpt from the physics curriculum draft [17].

The overall structure of the curriculum work, however, can be formally captured with reasonable effort. Figure 2 above depicts an example from the physics core curriculum draft [17], and illustrates that many of the central structures are presented as tables or lists. This allows accessing the main information categories (e.g. objective area) but, since no formal controlled vocabulary is associated with them, leaves the individual learning topics (e.g. electromagnetism) unspecified. Thus, while the syntax and semantic identity of the underlying information would ideally be specified in terms of some machine-readable metadata standard (such as RDFa [1]), it is still relatively easy to use the productification approach to capture the data.

Productification thus provides not only easy access to the bureaucratic framework of the data, but also suggests how the curriculum planning efforts could be improved from the perspective of machine-understandability (including individual learning topics), without migrating to expert design systems.

3.2 Productification for Collaborative Semantic Authoring

Today, the de facto standard for logic-based semantic modeling is the Web Ontology Language (OWL) [12]. The related W3C Semantic Web technologies provide facilities for linking, searching, reasoning with, and visualizing the related data [24]. We would thus like to use these technologies and the related methods for semantically capturing the core physics curriculum.

Figure 3 depicts a simple productification case system suitable for OWL ontology design and analysis. In order to support the semantic computing use cases UC1–UC3 mentioned above, we use the Protégé ontology editor [18] for ontology modeling, reasoning, and visualization. It is worth observing that the Protégé development team has also acknowledged the importance of shared authoring, and provides a (lite) Webized version of the ontology editor called WebProtégé.

Figure 3. A productified authoring system for collaborative semantic modeling.

To support intuitive authoring of the physics curriculum, supportive authoring is organized using the Apache OpenOffice (OO) [2] spreadsheet editor, with specific authoring instructions on how to edit the relevant data. This enables easy authoring of a subset of the complete ontology model. The resulting data is transformed into OWL using an adapter implemented as a relatively simple Python [19] script. The script includes two modules: a generic OpenOffice spreadsheet parser, and a specific adapter that maps certain spreadsheet structures into OWL axioms in standard RDF/XML [20]. Verification feedback is provided both by the adapter (e.g. a syntax error) and by the Protégé ontology editor (e.g. a missing link or an inconsistency), to ensure that supportive authoring meets the quality criteria of the task.
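To make the generic parser module concrete, the sketch below shows one way such a parser might look. An ODS file is a ZIP archive whose content.xml holds the sheets as ODF XML, so the core of the parsing is a walk over table:table, table:table-row, and table:table-cell elements. This is a simplified reconstruction under those assumptions, not the actual Ods2owl code, and it ignores many ODF details (styles, covered cells, very large repeated-cell runs).

```python
import xml.etree.ElementTree as ET

# ODF namespace URIs, as defined in the OpenDocument specification.
TABLE = "urn:oasis:names:tc:opendocument:xmlns:table:1.0"
TEXT = "urn:oasis:names:tc:opendocument:xmlns:text:1.0"

def parse_content_xml(xml_string):
    """Return {sheet name: rows}, each row a list of cell strings."""
    root = ET.fromstring(xml_string)
    sheets = {}
    for table in root.iter(f"{{{TABLE}}}table"):
        rows = []
        for row in table.iter(f"{{{TABLE}}}table-row"):
            cells = []
            for cell in row.iter(f"{{{TABLE}}}table-cell"):
                # The display text lives in nested text:p elements.
                value = "".join(p.text or "" for p in cell.iter(f"{{{TEXT}}}p"))
                # Expand repeated cells so column positions stay aligned.
                repeat = int(cell.get(f"{{{TABLE}}}number-columns-repeated", "1"))
                cells.extend([value] * repeat)
            rows.append(cells)
        sheets[table.get(f"{{{TABLE}}}name")] = rows
    return sheets
```

In a full adapter, xml_string would come from something like zipfile.ZipFile(path).read("content.xml"), and the resulting rows would then be dispatched to per-sheet handlers (Namespaces, Category, Properties, as described below).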

In our case study, this setting could also be used to analyze and develop the quality of the physics curriculum, by specifying additional, more specific test data and including so-called probe classes, so that the use cases UC1–UC3 could be formally verified using the Protégé system appropriately.

Figure 4. Supportive semantic authoring using the OpenOffice spreadsheet editor.

Figure 4 above depicts an example spreadsheet from the supportive authoring perspective. In brief, the spreadsheet is created by first copy/pasting the relevant structures from the physics curriculum draft, normalizing the structures, adding semantic labels and references, and assigning things explicit identifiers when needed. Once the structure is in place, supportive authoring does not require very specific technical skills. Note that since the structure of the original physics draft is not machine-understandable, these preliminary steps need to be manually repeated whenever the original curriculum draft changes.

Supportive authoring now involves editing a number of spreadsheets. Each sheet has a specific structure recognized by the adapter. Our implementation supports three kinds of sheets: namespace prefix definitions (Namespaces), hierarchical definitions (Category), and property table definitions (Properties).

A namespace prefix definition simply introduces all the known namespaces. It is needed for the namespace shorthand notations used in the other kinds of sheets. As a technical detail, note that an appropriate namespace definition allows specifying that a local name T1 is used for identifying a physics objective.
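For illustration, the prefix expansion that such a sheet enables might be sketched as follows; the ops prefix and its URI are hypothetical placeholders, not values from the actual case study data.

```python
def expand_qname(qname, namespaces):
    """Expand a prefixed name such as "ops:T1" into a full URI, using the
    prefix-to-URI mapping read from the Namespaces sheet."""
    prefix, sep, local = qname.partition(":")
    if not sep or prefix not in namespaces:
        raise ValueError(f"unknown or missing namespace prefix in {qname!r}")
    return namespaces[prefix] + local

# One prefix/URI pair per row of the Namespaces sheet (hypothetical URI).
namespaces = {"ops": "http://example.org/ops#"}
```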

A hierarchy definition specifies a tree structure of hierarchical concepts. It can be used to intuitively express, e.g., that PhysicsContentArea is a subclass of ContentArea.
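One way such a sheet could be interpreted is sketched below: assume each row names one class, with the column index of the cell encoding its depth in the tree. This layout is our assumption for illustration (the exact sheet format of the adapter is not reproduced here); the subclass axioms then fall out of a single pass with a stack.

```python
def hierarchy_to_axioms(rows):
    """Turn an indented Category sheet into (child, rdfs:subClassOf, parent)
    triples. rows is a list of cell lists; the first non-empty column of a
    row gives the depth of the class named in that cell."""
    stack = []     # stack[d] holds the most recent class seen at depth d
    triples = []
    for row in rows:
        depth = next(i for i, cell in enumerate(row) if cell)
        name = row[depth]
        del stack[depth:]            # leaving deeper branches of the tree
        if stack:                    # non-root: parent is one level up
            triples.append((name, "rdfs:subClassOf", stack[-1]))
        stack.append(name)
    return triples
```

For example, a root-level ContentArea row followed by an indented PhysicsContentArea row yields the axiom (PhysicsContentArea, rdfs:subClassOf, ContentArea).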

A property table includes rows of descriptions about a specific thing identified with a URI or a blank node. (The example in Figure 4 shows a property table.) Column headers describe the types of the properties assigned using the individual cells. The structure allows asserting both object and data properties, and provides shorthands for introducing URI names for things and for asserting many properties at once. For instance, with appropriate column headers, a cell value (e.g. "T1" or "S1 S4") may yield a data property (header dc:title) with an optional datatype definition (the adjacent column #DATATYPE specifying the datatype), two object properties with blank node identifiers (ops:relatedContentArea #LIST), or two object properties created from local names by adding a namespace prefix (ops:relatedContentArea #LIST &opsind;). As a consequence, a property table definition can be used to precisely express several semantic assertions at once, with an intuitive meaning similar to the semi-structured text in Figure 2.
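Our reading of these header directives can be sketched as follows. Note that the header grammar assumed below (token order, the &prefix; form, whitespace-separated list values) is reconstructed from the examples in the text, not from a published specification of the adapter, and the #DATATYPE shorthand is omitted for brevity.

```python
def expand_cell(subject, header, value):
    """Interpret one property-table cell under a directive-style header.

    Assumed header forms:
      "dc:title"                            -> one literal value
      "ops:relatedContentArea #LIST"        -> one assertion per token
      "ops:relatedContentArea #LIST &ops;"  -> tokens become prefixed names
    """
    parts = header.split()
    prop = parts[0]
    as_list = "#LIST" in parts
    # An &prefix; token says: build object names from the cell's local names.
    prefix = next((p.strip("&;") + ":" for p in parts if p.startswith("&")), None)
    tokens = value.split() if as_list else [value]
    return [(subject, prop, (prefix + tok) if prefix else tok) for tok in tokens]
```

Under these assumptions, the cell value "S1 S4" under the header "ops:relatedContentArea #LIST &opsind;" expands to two object property assertions on the row's subject.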

The OWL adapter is generic and makes no commitments to the physics content. Thus, in principle, any kind of content could be captured accordingly, simply by changing the descriptions. Note that the adapter might be considered a special case of GRDDL [5], but this technology was not applied in the case study.

In principle, it would be possible to introduce spreadsheet patterns for asserting any kinds of OWL axioms. In this case, however, this would clearly be overkill, since supportive authoring is most suitable for only certain kinds of OWL modules, and one can author the other OWL modules with other tools, including the Protégé editor.

Figure 5. Simple supportive modeling (spreadsheet) statistics.

Finally, supportive authoring tools may also provide easy access to other information that helps in understanding the authoring process. For instance, it is simple to add a programmatic spreadsheet that computes simple semantic modeling statistics, e.g. showing the distribution of individuals in the main classes (see Figure 5 above). With extra effort, more comprehensive analysis views might of course be developed to visualize also the extralogical process properties, such as use cases, "tickets", the distribution of edits, etc., if any. At some point, introducing additional (expert) tools for supportive authoring might be required, including genuine project management tools.
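Statistics of the kind shown in Figure 5 amount to little more than counting type assertions, so a spreadsheet formula or a few lines of code suffice. A sketch of the underlying computation (the class names in the example are hypothetical):

```python
from collections import Counter

def class_distribution(triples):
    """Count individuals per class from rdf:type assertions in the model."""
    return Counter(obj for subj, pred, obj in triples if pred == "rdf:type")

# Hypothetical model fragment: two objectives and one content area.
model = [
    ("opsind:T1", "rdf:type", "ops:Objective"),
    ("opsind:T2", "rdf:type", "ops:Objective"),
    ("opsind:S1", "rdf:type", "ops:ContentArea"),
]
distribution = class_distribution(model)
```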

Figure 6. Shared spreadsheet authoring in Google Drive.

Adopting the standard OpenOffice spreadsheet format (ODS) also supports collaborative authoring. Where collaboration via OO file sharing (e.g. via email or a version control system) proves limited, it is also possible to move the collaboration process into Google Drive [9] (see Figure 6). Since Google Drive supports exporting spreadsheets in the ODS format, no changes to the adapter are needed. In brief, this allows not only sharing but also (a)synchronously editing the data in terms of supportive authoring.

Figure 7. Analyzing the adapted model in Protégé.

Figure 7 above depicts the data output by the Ods2owl adapter, opened in the Protégé ontology editor. In practice, it is possible to exploit the data as an ontology module of some existing ontology, or simply open it as a single ontology, as above.

Figure 8. OntoGraf visualization of the semantic model.

Finally, Figure 8 above depicts an OntoGraf visualization of the resulting semantic model as output by the adapter, i.e. without any additional modules specified in Protégé. Semantic links between individuals are not visible in the picture. The authoring, query, inference, and visualization tools can be used to analyze and further develop the underlying semantic model. The resulting insight also provides crucial feedback to supportive authoring.

Please note that the model includes only the structures clearly present in the core physics curriculum draft. This indicates that in its current form, the use cases UC1–UC3 cannot be met at the given model granularity level, since only the very general curriculum concepts are identified. Note that this simply reflects the fact that such use cases were not explicitly defined in the original working group task description. The curriculum is of course highly useful, but requires careful human interpretation.

3.3 Discussion

Productification works quite nicely in our case study, and provides a concrete system for collaborative semantic authoring using familiar spreadsheet tools. In fact, when compared to the full-fledged Protégé editor, the spreadsheet approach also gives a better view of the structure of the core curriculum concepts, and explicitly points out the granularity of the model. Perhaps the biggest technical challenges are the manual work needed for setting up the spreadsheet structure in the beginning, and the occasional syntax errors due to supportive authoring mistakes.

At first, using a spreadsheet editor instead of a word processor in curriculum design might seem like a mere practical compromise, since it might of course be better if semantic descriptions were seamlessly added to the original text. Common WYSIWYG word processors, however, provide only limited facilities for non-experts to semantically describe data. Solely from the perspective of text editing, a better option might be using structured text editors for editing the original draft. This, however, might introduce difficulties for the authoring process with non-experts, and would introduce additional constraints for asynchronous authoring. And again, entering semantic descriptions "between the lines" might prevent understanding the formal structure of the data.

Either way, core curriculum planning and narrative document editing are complementary, but not completely overlapping activities. Ideally, core curriculum design should thus first and foremost take place as a collaborative modeling activity, the result of which is reported in narrative form – not vice versa. This in fact suggests favoring a spreadsheet editor over a word processor, with the eventual objective of introducing more comprehensive modeling tools for the task. Once the content and the semantic description of the core curriculum are finished, they can be reported narratively (perhaps including automatically compiled parts), and distributed in both human- and machine-understandable formats.

It is also important to observe that no model is good on its own, only with respect to some specified design criteria. Thus, the semantic modeling activity should take place with respect to explicitly identified design requirements, including refined tests for the use cases (e.g. UC1–UC3) and dependencies (e.g. associating 3rd-party learning material, using Semantic Web tools and technologies). Expert work is still required – but in a new, perhaps more abstract but productive form.

This discussion points out that an abstract modeling approach would indeed be needed: designing a good-quality core curriculum as a long narrative text document without any design system support is a very challenging task, simply due to the size of the document. At the time of writing, the current (April 15, 2014) public core curriculum draft [17] for school classes 7–9, including all teaching subjects, comprises around 250 narrative (A4) pages.

It is fair to say, however, that visualizing large models is also challenging, even with machine-readable data. A proper way of analyzing complicated models takes place, e.g., in terms of sets of query tests, instead of solely looking at visual diagrams. Thus, while visualizations like the one in Figure 8 are very helpful for gaining insight during modeling, they should only be used as support tools in a more systematic use case analysis. The problem is that large semantic networks and other datasets tend to yield very complicated information visualizations. Thus, visualizing only certain aspects of the data at a time is typically favored. For instance, in Figure 8 we ignored the (numerous) relationships between individuals (e.g. each of the Physics Objectives T1–T15 is related to 2–6 Physics Content Areas S1–S6).

Finally, our productification case study suggests some improvements in the design of the national core curriculum in general. The suggestions include clearly identifying the names of the important information items so that things can be properly typed, described, and linked; introducing a type system for the main (learning domain) concepts; carefully distinguishing where things are defined and where they are only referenced; and normalizing the design so that the definitions are consistent. In addition to adding structures to meet the case study objectives (UC1–UC3), perhaps the biggest observation is that in its current form, no controlled vocabularies were used when, e.g., describing the learning objectives. This means that the granularity of the modeling is not sufficient for, e.g., formally associating objectives, evaluation criteria, and learning material on the level of individual learning topics such as electromagnetism, even if this might intuitively be wished for.

Please note that, at minimum, introducing a sufficient taxonomy for describing the core physics curriculum would not have to involve a tedious philosophical modeling of physics in general (which of course has its own merits [10][14]) – only enumerating the core terms or the syllabus used in the descriptions. If more complicated terminologies are needed in the use cases, commonly known concept modeling technologies such as SKOS [13] and OWL [12] might be used.

4. CONCLUSION

In this article, we have introduced the concept of productification, i.e., transforming some application-specific authoring tasks into tasks of using common-purpose authoring tools and systems.

We have demonstrated and analyzed this approach in a case study of the collaborative semantic modeling of a core curriculum of physics. Besides pointing out an efficient approach for collaborative semantic modeling, we have used the productified authoring system to analyze the related core curriculum development process, making observations that help in designing high-quality core curriculums. For the case study domain, the main observation is arguably the need for controlled (physics etc.) vocabularies that would allow identifying and referring to individual learning topics, enabling refined semantic linking and machine-understandability.

In our case study, the main challenges arguably stem from the fact that modeling requires a different kind of approach to authoring, at least when compared to writing a narrative text document. Thus, even if productification provides low-cost access to collaborative authoring of semantic models using easy-to-use, mainstream supportive tools, the objective of semantic modeling in itself requires rethinking some aspects of the authoring process. This underlines the importance of planning how the tools (e.g., a spreadsheet) are used in the process, clearly acknowledging the main deliverables of the work (e.g., a formal core curriculum model and/or a narrative report), and establishing a formal validation mechanism for the work (e.g., query test cases). Of course, in some other, simpler application domains, such challenges might not be present.
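Planning the spreadsheet use might, for instance, include fixing a simple row convention and a mechanical mapping from rows to model triples. A minimal sketch, assuming hypothetical column names (`id`, `type`, `relates_to`) and semicolon-separated references:

```python
# A minimal sketch of turning spreadsheet rows (here, inline CSV) into
# model triples, in the spirit of the productified workflow.
# NOTE: the column names and row data are hypothetical examples.

import csv
import io

SHEET = """id,type,relates_to
T1,Objective,S1;S2
S1,ContentArea,
"""

def rows_to_triples(text):
    """Map each row to a type triple plus one triple per reference."""
    triples = []
    for row in csv.DictReader(io.StringIO(text)):
        triples.append((row["id"], "rdf:type", row["type"]))
        for target in filter(None, row["relates_to"].split(";")):
            triples.append((row["id"], "relatesTo", target))
    return triples
```

The point of such a convention is that the authors keep working in a familiar spreadsheet, while the model extraction stays a deterministic, testable transformation.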

Finally, note that for the purposes of our case study, we have defined productification (only) in the context of authoring. We believe that the concept might be straightforwardly extended to cover other development activities as well.

5. ACKNOWLEDGMENTS

This work has been partly supported by the eOps 3.0 project, funded by the National Board of Education. Thanks to Project Manager Jari Halonen from the City of Ylöjärvi for case study discussions and liaison activities between the different organizations. Thanks also to Project Manager Pekka Ranta from Tampere University of Technology and Senior Lecturer Jukka Tohu from Tampere University of Applied Sciences.

6. REFERENCES

[1] Adida, B., Birbeck, M., McCarron, S., and Herman, I. 2013. RDFa Core 1.1 - Second Edition. Syntax and processing rules for embedding RDF through attributes. W3C Recommendation 22 August 2013. Available at http://www.w3.org/TR/rdfa-core/ .

[2] Apache OpenOffice. The Apache Software Foundation. Available at http://www.openoffice.org .

[3] BBC Curriculum Ontology, version 1.2. Created Date: 2013/10/11 12:18:00. Available at http://www.bbc.co.uk/ontologies/curriculum .

[4] Chung H-S. and Kim, J-M. 2012. Ontology Design for Creating Adaptive Learning Path in e-Learning Environment. Proceedings of the International MultiConference of Engineers and Computer Scientists 2012 (IMECS2012), Hong Kong, 14-16 March, 2012.

[5] Connolly, D. 2007. Gleaning Resource Descriptions from Dialects of Languages (GRDDL). W3C Recommendation 11 September 2007. Available at http://www.w3.org/TR/grddl/ .

[6] Das, S., Sundara, S., and Cyganiak, R. 2012. R2RML: RDB to RDF Mapping Language. W3C Recommendation 27 September 2012. Available at http://www.w3.org/TR/r2rml/ .

[7] Finnish National Board of Education. OPS 2016 – Renewal of the core curriculum for pre-primary and basic education. Current issues 6.2.2013. Available at http://www.oph.fi/english/current_issues/101/0/ops2016_renewal_of_the_core_curriculum_for_pre-primary_and_basic_education .

[8] Gizmo's Freeware. Best Free Office Suite. Gizmo's Freeware. Available at http://www.techsupportalert.com/best-free-office-suite.htm .

[9] Google Drive. Overview of Google Drive. Google 2014. Available at https://support.google.com/drive/answer/2424384?hl=en .

[10] Gupta, A., Hammer, D., and Redish, E. F. 2010. The Case for Dynamic Models of Learners' Ontologies in Physics. Journal of the Learning Sciences, 19:3, pp. 285-321.

[11] Hihn, J., Lewicki, S., and Wilkinson, B., 2009. How Spreadsheets Get Us to Mars and Beyond. In Proceedings of the International Conference on System Sciences, 2009. HICSS '09. 42nd Hawaii, DOI: 10.1109/HICSS.2009.239, pp. 1-9.

[12] Hitzler, P., Krötzsch, M., Parsia, B., Patel-Schneider, P.F., and Rudolph, S. 2012. OWL 2 Web Ontology Language Primer (2nd Ed.). W3C Recommendation 11 December 2012. Available at http://www.w3.org/TR/owl2-primer/ .

[13] Isaac, A. and Summers, E. 2009. SKOS Simple Knowledge Organization System Primer. W3C Working Group Note 18 August 2009. Available at http://www.w3.org/TR/skos-primer .

[14] Libarkin, J.C. and Kurdziel, J.P. 2006. Ontology and the Teaching of Earth System Science. Journal of Geoscience Education, v. 54, n. 3, May, 2006, pp. 408-413.

[15] MIT OCW. High School Physics | MIT OpenCourseWare. Massachusetts Institute of Technology. Available at http://ocw.mit.edu/high-school/physics/ .

[16] Nykänen, O. 2000. A Framework for Generating Non-trivial Interactive Mathematical Exercises in the Web: Dynamic Exercises. ED-MEDIA 2000, World Conference on Educational Multimedia, Hypermedia & Telecommunications. June 26 - July 1, 2000. Montréal, Canada. pp. 1438-1439.

[17] Opetushallitus. Perusopetuksen opetussuunnitelman perusteet: Opetus vuosiluokilla 7-9, Luonnos 15.4.2014 [National core curriculum for basic education: Instruction in grades 7-9, draft 15 April 2014; in Finnish]. Available at http://www.oph.fi/ops2016 .

[18] Protégé Ontology Editor. Stanford Center for Biomedical Informatics Research. Available at http://protege.stanford.edu .

[19] Python. Python Software Foundation. Available at https://www.python.org .

[20] Schreiber, G. and Raimond, Y. 2014. RDF 1.1 Primer. W3C Working Group Note 25 February 2014. Available at http://www.w3.org/TR/rdf11-primer/ .

[21] Suomi.fi. Laatua verkkoon [Quality for the web; in Finnish]. Suomi.fi-toimitus | Valtiokonttori. Available at http://www.suomi.fi/suomifi/tyohuone/laatua_verkkoon/ .

[22] Wikipedia. Comparison of office suites. Wikipedia. Available at http://en.wikipedia.org/wiki/Comparison_of_office_suites .

[23] W3C CSV on the Web Working Group Charter. World Wide Web Consortium (W3C). Available at http://www.w3.org/2013/05/lcsv-charter.html .

[24] W3C Data Web Activity – Building the Web of Data. World Wide Web Consortium (W3C). Available at http://www.w3.org/2013/data/ .