Special Issue on ICIT 2009 Conference - Applied Computing, UbiCC Journal (Ubiquitous Computing and Communication Journal)


UBICC Journal - Ubiquitous Computing and Communication Journal

    Volume 4 Number 3 July 2009 ISSN 1992-8424

    Special Issue on ICIT 2009 Conference Applied Computing

UBICC Publishers 2009
Ubiquitous Computing and Communication Journal


Co-Editor: Dr. Al-Dahoud Ali

Ubiquitous Computing and Communication Journal

    Book: 2009 Volume 4

    Publishing Date: 07-30-2009

    Proceedings

    ISSN 1992-8424

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the copyright law of 1965, in its current version, and permission for use must always be obtained from UBICC Publishers. Violations are liable to prosecution under the copyright law.

    UBICC Journal is a part of UBICC Publishers

    www.ubicc.org

    UBICC Journal

    Printed in South Korea

Typesetting: Camera-ready by author, data conversion by UBICC Publishing Services, South Korea

    UBICC Publishers


Guest Editor's Biography

Dr. Al-Dahoud is an associate professor at Al-Zaytoonah University, Amman, Jordan.

He received his PhD from La Sapienza University, Italy, and Kiev Polytechnic, Ukraine, in 1996. He has worked at Al-Zaytoonah University since 1996, and has served as a visiting professor at many universities in Jordan and the Middle East, supervising master's and PhD degrees in computer science. He established ICIT in 2003 and has been its program chair ever since. He was Vice President of the IT committee in the Ministry of Youth, Jordan, in 2005 and 2006. Al-Dahoud was the General Chair of ICITST-2008, June 23-28, 2008, Dublin, Ireland (www.icitst.org).

    He has directed and led many projects sponsored by NUFFIC/Netherlands:

- The Tailor-made Training (2007) and "On-Line Learning & Learning in an Integrated Virtual Environment" (2008).

He is an active conference organizer and participates in the following conferences as general chair, program chair, session organizer or publicity committee member:

    - ICITs, ICITST, ICITNS, DepCos, ICTA, ACITs, IMCL, WSEAS, and AICCSA

Journal activities: Al-Dahoud has served as Editor-in-Chief, guest editor or editorial board member of the following journals:

    Journal of Digital Information Management, IAJIT, Journal of Computer Science, Int. J. Internet Technology and

    Secured Transactions, and UBICC.

    He published many books and journal papers, and participated as speaker in many conferences worldwide.


UBICC Journal

    Volume 4, Number 3, July 2009

SPECIAL ISSUE ON ICIT 2009 CONFERENCE: APPLIED COMPUTING

    538 The influence of blended learning model on developing leadership skills

of school administrators
Tufan Aytaç

    544 Towards the implementation of temporal-based software version

    management in Universiti Darul Iman Malaysia

    Mohd Nordin Abdul Rahman, Azrul Amri Jamal, Wan Dagang Wan Ali

    551 Effective digital forensic analysis of the NTFS disk image

    Mamoun Alazab, Sitalakshmi Venkatraman, Paul Watters

559 Job and application-level scheduling in distributed computing
Victor V. Toporkov

571 Least and greatest fixed points of a while semantics function

    Fairouz Tchier

    585 Case studies in thin client acceptance

    Paul Doyle, Mark Deegan, David Markey, Rose Tinabo, Bossi Masamila, David Tracey

    599 An interactive composition of workflow applications based on UML

    activity diagram

Yousra Bendaly Hlaoui, Leila Jemni Ben Ayed

    609 How to map perspectives

    Gilbert Ahamer


THE INFLUENCE OF BLENDED LEARNING MODEL ON DEVELOPING LEADERSHIP SKILLS OF SCHOOL ADMINISTRATORS

Tufan AYTAÇ
The Ministry of National Education, Ankara, TURKEY

    [email protected]

The usage of the b-learning approach in in-service education activities in the Turkish education system is getting more and more important these days. Generally, traditional education and computer-based education applications are used in in-service education activities. Blended learning (b-learning) combines online learning with face-to-face learning. The goal of blended learning is to provide the most efficient and effective learning experience by combining learning environments. The purpose of this research is to find out the effect of the b-learning approach on developing administrators' leadership skills. To identify the school administrators' educational needs and their existing leadership skills, a needs assessment questionnaire was applied to 72 school administrators selected from 33 primary schools in 11 regions of Ankara, the capital city. According to the descriptive statistical analysis results of the questionnaire, an in-service training programme was prepared for the development of school administrators' leadership skills. The school administrators were separated into three groups: computer-based learning (CBL) (25 participants), blended learning (BL) (23 participants) and traditional learning (TL) (24 participants). These groups were trained separately in these three different learning environments by using the in-service training programme. According to the results of the pre-test, post-test and achievement score means, it was observed that the BL group's score was the highest when compared to the TL and CBL groups. As a result of this research, in terms of achievement and effectiveness, b-learning was found to be the most effective learning environment when compared to the others. Both learners' and tutors' findings strongly suggest that blended learning is a viable alternative delivery method for in-service education activities.1

1 This research project article has been supported by The Scientific and Technological Research Council of Turkey (TÜBİTAK) (SOBAG 1001 Programme).

    Keywords: Blended Learning, e-Learning, Information Technology, In-service education

1 INTRODUCTION

Blended Learning (b-Learning or Hybrid Learning) consists of the combination of e-Learning and the traditional education approach. Blended learning combines online learning with face-to-face learning. The goal of blended learning is to provide the most efficient and effective learning experience by combining different learning environments. b-Learning stands in the forefront in respect of interactivity with the target learner group, enriching the learning process and integrating technology into education [1,2,3,16,21].

E-learning has had an interesting impact on the learning environment. Blended learning is the most logical and natural evolution of our learning agenda. It suggests an elegant solution to the challenges of tailoring learning and development. It represents an opportunity to integrate the innovative and technological advances offered by online learning with the interaction and participation offered in the best of traditional learning [20].

The blended learning approach is grounded in the strengths of both traditional education and computer-based education, instead of using one or the other on its own.

The basic characteristics of blended learning, which reflect the values of 21st-century education, are [2]:

- providing a new way of learning and teaching,
- teaching how to learn,
- creating digital learners,
- being more economical,
- focusing on technology and communication,
- improving project-based learning,
- and improving the teaching process.


Blended learning practices provide project-based learning opportunities for active learning and interaction among learners, and especially provide a way to meet the educational needs of the learners. Blended learning programs may include several forms of learning tools, such as real-time virtual/collaboration software, self-paced web-based courses, electronic performance support systems (EPSS) embedded within the learning-task environment, and knowledge management systems. Blended learning contains various event-based activities, including face-to-face learning, e-learning, and self-paced learning activities. Blended learning often occurs as a mixture of traditional instructor-led training, synchronous online training, asynchronous self-paced study, and structured task-based training from a teacher or mentor. The aim of blended learning is to combine the best of classroom face-to-face learning experiences with the best of online learning experiences. Overall, blended learning refers to the integration (or the so-called blending) of e-learning tools and techniques with traditional face-to-face teaching delivery methods. The two important factors here are the time spent on online activities and the amount of technology utilized; see the concept of blended learning in Figure 1 below [3,4,6,7,8,9,10,11,12,15,16,19].

    Fig. 1: Concept of Blended Learning

If two or more of the learning environments stated above are used to teach an educational objective, it can be said that blended learning is realized. However, blended learning means more than showing a web page during a lesson in the classroom and immediately using information on that web page to explain the lesson. Blended learning is a learning environment which combines face-to-face learning and web-based distance learning.

Blended learning overcomes the limitations of an e-learning-only approach [12]. Today blended learning primarily functions as a replacement for, or extension of, face-to-face environments. For instance, it might be used to foster learning communities, extend training events, offer follow-up resources in a community of practice, access guest experts, provide timely mentoring or coaching, present online lab or simulation activities, and deliver prework or supplemental course materials. While such uses may be unique and engaging, they are not exactly novel [13].

    Figure 2: A Blend of Learning Theories

By applying the learning theories of Keller, Gagné, Bloom, Merrill, Clark and Gery, five key ingredients emerge as important elements of a blended learning process (see Figure 2):
1. Live Events: Synchronous, instructor-led learning events in which all learners participate at the same time, such as in a live virtual classroom.
2. Self-Paced Learning: Learning experiences that the learner completes individually, at his own speed and on his own time, such as interactive, Internet-based or CD-ROM training.
3. Collaboration: Environments in which learners communicate with others, for example, e-mail, threaded discussions or online chat.
4. Assessment: A measure of learners' knowledge. Pre-assessments can come before live or self-paced events, to determine prior knowledge, and post-assessments can occur following live or self-paced learning events, to measure learning transfer.
5. Performance Support Materials: On-the-job reference materials that enhance learning retention and transfer, including PDA downloads, and printable references, summaries, and job aids.

    2 PURPOSE

The purpose of this research is to find out the effects of the b-learning approach on developing school administrators' leadership skills.

    3 RESEARCH DESIGN

To determine the school administrators' educational needs on leadership skills, a needs assessment questionnaire was applied to 72 school administrators who were selected from 33 primary schools in 11 regions within Ankara, the capital city. According to the results of this questionnaire, an in-service


training programme on developing school administrators' leadership skills was prepared.

The most needed leadership skills of school administrators, according to the results of the needs assessment, were determined as human relations in administration, basic management skills for school principals, job satisfaction in organizations, and motivation.

After that, the content and learning activities of the "School Administrators' Leadership Skills Development In-service Programme" were prepared. Besides that, course notes were prepared as training materials to be distributed to the participants in the form of CD-ROM and printed documents.

The school administrators were separated into three groups: Computer Based Learning (CBL) (25 participants), Blended Learning (BL) (23 participants) and Traditional Learning (TL) (24 participants). These groups were trained according to three different methods using the prepared education programme. Each group was given a two-day course.

Before the in-service training, the school administrators in the BL group accessed the digital content and studied the learning activities included in the "School Administrators' Leadership Skills Development In-service Programme", which was prepared using the Moodle Learning Management System software and published at the http://beg.meb.gov.tr:8088/ website.

The school administrators in the BL group entered the http://beg.meb.gov.tr:8088/ webpage using the usernames and passwords given to them three weeks before the in-service training. The interface of the website is shown in Fig. 3. The school administrators in this group shared information, chatted, and studied activities with their colleagues and a subject area specialist about the content and learning activities included in the site whenever they wanted. As online learners, the school administrators built their confidence and learning processes as they got used to working independently online. Blended learning activities included online knowledge gathering and construction in teams or groups, publishing of electronic content, interactive elements like online brainstorming, discussion, several forms of feedback, evaluation and assessment, as well as other blended learning techniques. Lecturers posted messages to the BL group as a whole and to each administrator individually to meet their need for support. They posted explanations to guide learners in more complex tasks, and encouraged them to communicate, to do their individual assignments, and to use the Moodle platform tools they had at their disposal to facilitate their work. Tutors controlled and marked the online assignments, filled in learners' performance reports, and wrote feedback on their performance in their online portfolios. Lecturers followed the school administrators' learning improvements and gave encouragement when motivation levels began to falter. After that, this group was trained by a lecturer who was a subject area specialist. The lecturer trained this group using face-to-face education, computer-based education and the online training website prepared with Moodle software.

Figure 3: The Moodle interface

On the other hand, all the in-service training content and activities were taught to the CBL group by the lecturer with the aid of a computer and projector. Finally, the TL group was trained in a traditional way by using a blackboard.

A multiple-choice test made up of 20 questions was applied to the groups to investigate their achievement in leadership skills. The test was shown to content experts to verify its content validity. To find out the statistically significant differences among the three groups' score means, one-way ANOVA and the Scheffé test were used. The test was applied to all groups as a pre-test at the beginning and as a post-test at the end of the in-service training [5]. The Blended Learning Model used in the research process is shown in Figure 4.

Figure 4: The Process of the Blended Learning Model
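As context for the statistical comparison described above, the sketch below runs the same kind of omnibus test in Python with scipy. The three score lists are hypothetical placeholders rather than the study's data, and since scipy does not ship a Scheffé test, only the one-way ANOVA step is shown.

```python
# A minimal sketch of the omnibus comparison used in the study: one-way
# ANOVA over three independent groups. The score lists are hypothetical
# placeholders, not the study's data.
from scipy import stats

cbl_scores = [62, 70, 65, 68, 72, 66, 64, 69]   # computer-based learning
bl_scores  = [78, 85, 80, 83, 88, 82, 79, 84]   # blended learning
tl_scores  = [55, 60, 58, 62, 57, 59, 61, 56]   # traditional learning

# Tests whether the three group means differ significantly.
f_stat, p_value = stats.f_oneway(cbl_scores, bl_scores, tl_scores)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")

# A significant F would then be followed by a post hoc test (the study
# used Scheffe) to locate which pairs of groups differ.
```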


    4 RESULTS

When the three groups' pre-test score means were compared, it was seen that there were significant differences among them (F(2,69) = 53.350, p


We observed that b-Learning opportunities for teaching objectives make learning entertaining, fun, lasting and economical in an effective way. In this sense, in our view, trainers should use the b-Learning environment for the effective integration of ICT in learning and teaching.

Last year, the Turkish Ministry of National Education In-service Training Department implemented more than 700 in-service training courses. The usage of the b-Learning methodology, especially in these in-service trainings, will enrich and support their learning-teaching processes. More projects on the usage of b-Learning in in-service training should be supported and performed.

In particular, the initiatives of the Turkish Ministry of National Education for improving schools' information technologies and internet infrastructure, distributing authoring software to teachers, and developing an education portal and its content, together with Moodle and similar learning management system software, should be used to support b-learning usage in in-service training. School administrators state that b-learning approaches will be used more effectively in the class. All school administrators' comments regarding the blended course were positive.

The positives of the blended learning course activities used in this research are cited below:
- Improvement in the quantity and/or quality of the communication among the school administrators in discussion boards or online groups and in face-to-face activities in the classroom.
- Good cooperative learning activities.
- Blended learning was more effective than the classroom alone: higher learner value and impact, with effectiveness greater than for non-blended approaches. Learners like b-learning approaches.
- Rapid accessibility to b-learning content and activities (any time, anywhere).
- Improved relationships between tutors and students.
- The immediate feedback that could be given to school administrators.
- Flexibility in scheduling and timetabling of course work.
- An increase in the time actually spent face-to-face in the classroom.
- Cost effectiveness for both the accrediting learning institution and the learner.

The decreased cost, reduced training time, and the ability to easily update training materials offer additional compelling reasons for educators to embrace blended learning [22].

In the research there were also some problems, according to the school administrators' opinions, cited below:
- Some technical web and internet problems in accessing the Moodle platform.
- The failure of online PowerPoint presentations of lecture material to meet some school administrators' expectations.
- Some school administrators' lack of enthusiasm about being in a blended learning course.
- Limited knowledge in the use of technology.
- Blended learning takes time for both the instructor and the learner to adapt to this relatively new concept in delivering instruction.

In particular, it can be concluded that all in-service training could be taught more effectively by using the b-Learning approach. The technological leadership role of school administrators is very important for the success of the b-Learning approach.

The features of blended learning models are of vital importance for applying individual learning and active learning. According to some authors, a blend is an integrated strategy for delivering on promises about learning and performance [17].

In sum, both learners' and tutors' findings strongly suggest that blended learning is a viable alternative delivery method for courses. Institutions supporting blended learning, especially in in-service education courses, remain both national leaders in the effective use of technology for teaching and learning, and pioneers in identifying the right mix of face-to-face and online communication practices that will enhance learning effectiveness [19]. The results of this research back all of this up. To develop the technological leadership of school administrators, b-learning approaches should be used effectively. Blended learning offers opportunities for in-service school administrators, in-service teachers and their learners.

    REFERENCES

[1] Aytaç, T. Eğitimde Bilişim Teknolojileri [Information Technologies in Education]. Asil Yayın Dağıtım, pp. 48-53 (2006).
[2] Aytaç, T. The Influence of B-Learning Model on Developing Leadership Skills of Education Administrators Research Education Programme, pp. 48-53 (2006).
[3] Singh, H. Building Effective Blended Learning Programs, Educational Technology, Vol. 43, Number 6, pp. 51-54, November-December (2003).
[4] Oliver, M. and Trigwell, K. Can Blended Learning Be Redeemed?, E-Learning, Vol. 2, Number 1, pp. 17, (2005).
[5] Büyüköztürk, Ş. Sosyal Bilimler İçin Veri Analizi El Kitabı: İstatistik, Araştırma Deseni, SPSS Uygulamaları ve Yorum [Data Analysis Handbook for the Social Sciences: Statistics, Research Design, SPSS Applications and Interpretation], 8th edition, PegemA Yayıncılık, pp. 40-53, Ankara, (2007).
[6] Bonk, C. J.; Olson, T. M.; Wisher, R. A. and Orvis, K. L. Learning from Focus Groups: An Examination of Blended Learning, Journal of Distance Education, Vol. 17, No. 3, pp. 100, (2002).
[7] Marsh, J. How to Design Effective Blended Learning. www.brandon-hall.com. Accessed: 15 February 2009.
[8] Orhan, F., Altınışık, S. A. and Kablan, Z. Karma Öğrenme (Blended Learning) Ortamına Dayalı Bir Uygulama: Yıldız Teknik Üniversitesi Örneği [An Application Based on a Blended Learning Environment: The Case of Yıldız Technical University], IV. Uluslararası Eğitim Teknolojileri Sempozyumu, 24-26 November 2004, Sakarya, Vol. 1, pp. 646-651, (2004).
[9] Dracup, M. Role Play in Blended Learning: A Case Study Exploring the Impact of Story and Other Elements, Australasian Journal of Educational Technology, 24(3), pp. 294-310, (2008).
[10] Cooper, G. and Heinze, A. Centralization of Assessment: Meeting the Challenges of Multi-year Team Projects in Information Systems Education, Journal of Information Systems Education, 18(3), pp. 345-356, (2007).
[11] Heinze, A. Lecturer in Information Systems, http://www.aheinze.me.uk/Blended_Learning_Higher_Education.html. Accessed: 15 February 2009.
[12] Langley, A. Experiential Learning, E-Learning and Social Learning: The EES Approach to Developing Blended Learning, The Fourth Education in a Changing Environment Conference Book, edited by Eamon O'Doherty, Informing Science Press, pp. 171-172, (2007).
[13] Bonk, C. J. & Graham, C. R. (Eds.). Future Directions of Blended Learning in Higher Education and Workplace Learning Settings, Handbook of Blended Learning: Global Perspectives, Local Designs. San Francisco, CA: Pfeiffer Publishing, (2004).
[14] Carman, J. M. Blended Learning Design: Five Key Ingredients, Director, Product Development, KnowledgeNet, October 2002, www.brandon-hall.com. Accessed: 15 February 2009.
[15] Derntl, M. and Motschnig-Pitrik, R. A Layered Blended Learning Systems Structure, Proceedings of I-KNOW '04, Graz, Austria, June 30 - July 2, (2004).
[16] Bañados, E. A Blended-learning Pedagogical Model for Teaching and Learning EFL Successfully Through an Online Interactive Multimedia Environment, CALICO Journal, Vol. 23, No. 3, pp. 533-550, (2006).
[17] Rossett, A., Douglis, F. & Frazee, R. V. Strategies for Building Blended Learning, Learning Circuits. Retrieved August 13, 2007, from http://www.learningcircuits.org/2003/jul2003/rossett.htm.
[18] Brandl, K. Are You Ready to Moodle?, Language Learning & Technology, Vol. 9, No. 2, pp. 16-23, May (2005).
[19] Rochester Institute of Technology. Blended Learning Pilot Project: Final Report for 2003-2004 and 2004-2005. Retrieved Feb 5, 2009, from http://distancelearning.rit.edu/blended/Files/BlendedPilotFinalReport2003_04.pdf.
[20] Thorne, K. Blended Learning: How to Integrate Online and Traditional Learning. United States, Kogan Page, (2004).
[21] Rovai, A. P. and Jordan, H. M. Blended Learning with Traditional and Fully Online Graduate Courses, International Review of Research in Open and Distance Learning, 2004. Retrieved Sept 27, 2008, from http://www.irrodl.org/content/v5.2/rovaijordan.html.
[22] Thorsteinsson, G. and Page, T. Blended Learning Approach to Improve In-Service Teacher Education in Europe Through the FISTE Comenius 2.1 Project, ICT in Education: Reflections and Perspectives, Bucharest, June 14-16, (2007).


    TOWARDS THE IMPLEMENTATION OF TEMPORAL-BASED

    SOFTWARE VERSION MANAGEMENT AT UNIVERSITI DARUL IMAN

    MALAYSIA

M Nordin A Rahman, Azrul Amri Jamal and W Dagang W Ali
Faculty of Informatics

    Universiti Darul Iman Malaysia, KUSZA Campus

21300 Kuala Terengganu, Malaysia

    [email protected], [email protected], [email protected]

    ABSTRACT

Integrated software is very important for the university to manage its day-to-day operations. This integrated software goes through an evolution process when changes are requested by users, and finally new versions are created. Software version management is the process of identifying and keeping track of different versions of software. The complexity of this process becomes greater when the software is distributed in many places. This paper presents a temporal-based software version management model. The model is purposely implemented for managing software versions at the Information Technology Centre, Universiti Darul Iman Malaysia. Temporal elements such as valid time and transaction time are the main attributes considered for insertion into the software version management database. Having these two attributes helps the people involved in the software process to organize data and perform monitoring activities more efficiently.

    Keywords: version management, temporal database, valid time, transaction time.

    1. INTRODUCTION

    Software evolution is concerned with modifying

    software once it is delivered to a customer. Software

    managers must devise a systematic procedure to

    ensure that different software versions may be

    retrieved when required and are not accidentally

changed. Controlling the development of different software versions can be a complex task, even for a

    single author to handle. This task is likely to become

    more complex as the number of software authors

    increases, and more complex still if those software

    authors are distributed geographically with only

    limited means of communication, such as electronic

    mail, to connect them.

Temporal-based data management has been a hot
topic in the database research community for the
last couple of decades. Due to this effort, a large

    infrastructure such as data models, query languages

    and index structures has been developed for the

    management of data involving time [11]. Nowadays,

a number of software systems have adopted the concepts of

    temporal database management such as artificial

    intelligence software, geographic information systems

and robotics. The temporal management aspects of any object could include:

- The capability to detect change, such as the amount of change in a specific project or object over a certain period of time.
- The use of data to conduct analyses of past events, e.g., the change of valid time for a project or version due to an event.
- Keeping track of all transaction statuses over the project or object life cycle.

Universiti Darul Iman Malaysia (UDM) is the first full university on the East Coast of Malaysia, located in the state of Terengganu. It was set up on 1st January 2006. UDM has two campuses, named KUSZA Campus and City Campus. Another new campus, known as Besut Campus, will begin operating soon. To date, KUSZA Campus has six faculties and City Campus has three faculties. The university also has an Information Technology Centre (ITC-UDM) whose purpose is to develop and maintain the university's information systems and information technology infrastructure.

In this paper, we concentrate on the modelling of temporal-based software version management. Based on the model, a simple web-based application has been developed and suggested for use by ITC-UDM. The rest of the paper is organized as follows: the next section reviews the concept of temporal data management. Section 3 discusses the current techniques in software version management. Current issues in software version management at ITC-UDM are discussed in Section 4. The specifications of the proposed temporal-based software version management model are explained in Section 5. The conclusion is presented in Section 6.


    2. TEMPORAL DATA CONCEPT

    To date, transaction time and valid time are the two

well-known kinds of time that are usually considered in the

    literature of temporal database management [2, 4, 6,

    9, 10, 11, 12]. The valid time of a database fact is thetime when the fact is true in the miniworld [2, 6, 9,

    10]. In other words, valid time concerns the

    evaluation of data with respect to the application

    reality that data describe. Valid time can be

    represented with single chronon identifiers (e.g.,

    event time-stamps), with intervals (e.g., as interval

    time-stamps), or as valid time elements, which are

    finite sets of intervals [9]. Meanwhile, the transaction

    time of a database fact is the time when the fact is

    current in the database and may be retrieved [2, 6, 9,

10]. This means that the transaction time is the

    evaluation time of data with respect to the system

    where data are stored. Supporting transaction time is

    necessary when one would like to roll back the state

of the database to a previous point in time. Reference [9]
proposed that four implicit times could be derived from
valid time and transaction time:

- valid time: valid-from and valid-to
- transaction time: transaction-start and transaction-stop
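As a concrete reading of these four implicit times, the sketch below models a bitemporal fact in Python and shows the rollback use mentioned above; the record layout and field names are our illustrative choices, not taken from the cited papers.

```python
# Illustrative bitemporal fact carrying the four implicit times derived
# from valid time and transaction time. Field names are our own.
from dataclasses import dataclass
from datetime import date

@dataclass
class BitemporalFact:
    value: str
    valid_from: date   # when the fact becomes true in the miniworld
    valid_to: date     # when the fact stops being true
    tx_start: date     # when the fact became current in the database
    tx_stop: date      # when it was superseded (date.max = still current)

history = [
    BitemporalFact("module X valid", date(2009, 1, 1), date(2009, 3, 1),
                   date(2009, 1, 2), date(2009, 2, 15)),
    BitemporalFact("module X valid", date(2009, 1, 1), date(2009, 4, 1),
                   date(2009, 2, 15), date.max),
]

def as_of(facts, tx_time):
    """Roll back the database state to a previous point in time."""
    return [f for f in facts if f.tx_start <= tx_time < f.tx_stop]

print(as_of(history, date(2009, 2, 1)))  # the belief held on 2009-02-01
```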

    Temporal information can be classified into two

divisions: absolute temporal and relative temporal [9].

    Most of the research in temporal databases

    concentrated on temporal models with absolute

    temporal information. To extend the scope oftemporal dimension, [12] presented a model which

    allows relative temporal information e.g., event A

    happened before event B and after January 01, 2003.

    [12] suggests several temporal operators that could be

    used for describing the relative temporal information:

    {equal, before, after, meets, overlaps, starts, during,

    finishes, finished-by, contains, started-by, overlapped-

    by, met-by and after}.

In various temporal research papers, the theory of time-elements can be divided into two categories: intervals and points [6, 9, 11]. If T denotes a nonempty set of time-elements and d denotes a function from T to R+ (the set of nonnegative real numbers), then:

    time_element(t) = point,    if d(t) = 0
                      interval, otherwise

According to this classification, the set of time-elements, T, may be expressed as T = I ∪ P, where I is the set of intervals and P is the set of points.
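In code, this classification reduces to a duration test; the lines below are a direct transcription, assuming a time-element is stored as a (start, end) pair.

```python
# The classification above as code: a time-element is a point when its
# duration d(t) is zero, and an interval otherwise. Assumes t = (start, end).
def d(t):
    return t[1] - t[0]        # duration, a nonnegative real number

def classify(t):
    return "point" if d(t) == 0 else "interval"

print(classify((5, 5)), classify((2, 9)))   # point interval
```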

    3. RELATED TOOLS IN SOFTWARE VERSION

    MANAGEMENT

In a distributed software process, good version
management combines systematic procedures and
automated tools to manage different versions in many locations. Most of the methods of version naming use

    a numeric structure [5]. Identifying versions of the

    system appears to be straightforward. The first

    version and release of a system is simply called 1.0,

    subsequent versions are 1.1, 1.2 and so on.

    Meanwhile, [3] suggests that every new version

    produced should be placed in a different directory or

    location from the old version. Therefore, the version

accessing process would be easier and more effective.

    Besides that, should this method be implemented

    using a suitable database management system, the

    concept of lock access could be used to prevent the

occurrence of overlapping processes. Presently, there are
many software evolution management tools available
in the market. Selected tools are described as follows:

Software Release Manager (SRM) - SRM is free software, supported on most UNIX and

    LINUX platforms. It supports the software

    version management for distributed

    organizations. In particular, SRM tracks

    dependency information to automate and

    optimize the retrieval of systems components as

    well as versions.

Revision Control System (RCS) - RCS uses the concept of tree structures. Each branch in the
tree represents a variant of the version. These branches are numbered by their entering
sequence into a system database. RCS records

    details of any transaction made such as the

    author, date and reason for the updating.

Change and Configuration Control (CCC) - CCC is one of the most complete tools for software configuration management. It provides a good platform for identification, change control and status accounting. CCC allows simultaneous work on the same version via virtual copies. These can be merged, and changes can be applied across configurations.

Software Management System (SMS) - SMS supports all aspects of software configuration

    management such as version control, workspace

    management, system modelling, derived object

    management, change detection in the repository

    etc. SMS possesses the desired characteristics,

providing resources for version control of systems

    and having a good user interface.


    4. THE SOFTWARE VERSION MANAGEMENT

    ISSUES IN ITC-UDM

Three divisions have been formed at ITC-
UDM. These divisions and their functions are as

    follows:

- Infrastructure and Application Systems (AIS) - to develop and maintain the university software and the university computer networking;
- Technical and Services (TS) - to support the maintenance of information technology hardware, training, multimedia services and the help desk;
- Administration and Procurement (AP) - to manage the daily operations of ITC-UDM, such as administration, procurement, etc.

    Each division is headed by a division leader and

    supported by several information technology officers,

    assistant information technology officers and

    technicians. All the university software modules are

    developed and maintained by AIS Division. Figure 1

    depicts the main software modules managed by the

ITC-UDM. Thousands of source code files
have been produced by the division. Therefore, it is not

    easy for the division to manage all those artefacts.

Figure 1: University Software Modules (Academic, Human Resource, Student Affairs and Finance)

From a study done by the authors, three main weaknesses were found in the current approach used by ITC-UDM for managing all versions of source code produced:

- A non-systematic procedure is used for managing software versions, and it is difficult to recognize the valid time of each version.
- The current approach does not consider the aspect of relative temporal information in representing the valid time of each version.
- The current approach maintains only the concept of a current-view version, in which an existing version is overwritten by a new incoming version during an update.

Based on the mentioned problems, we strongly believe that the development of a temporal-based software version management tool for ITC-UDM could provide the following benefits:

- Supporting project and software managers in planning, managing and evaluating version management.
- Assigning timestamps (absolute and relative) to each transaction will provide transaction-time database functionality, meaning that all previously current database states are retained and made available for time-based queries.
- Increasing the effectiveness and efficiency of the collaborative software version management process.

    5. THE MODEL

    Version control is one of the main tasks in software

    configuration management. For any software version

    would have its own valid time. The collection of

    software versions should be organized into systematic

    way for the purpose of retrieval efficiency and to

    recognize valid time of those versions. Besides the

    used of unique sign for the associate version, the

    method of time-stamping is also needed to be

    embedded into the version management database.

    5.1 The Temporal-Based Version Management

    Specifications

    Temporal elements involved in the model are

    transaction time (tt), absolute valid time (avt) and

relative valid time (rvt), which can be denoted as TE = {tt, avt, rvt}. Transaction time is a date-stamp representing the transaction when a new valid time for a version is recorded into the application database. Absolute valid time is represented by two different attributes, known as valid-from and valid-until, and it also uses a date-stamping approach. Meanwhile, relative valid time, which involves a time interval, is represented by a combination of temporal operators, OPERATORs = {op1, op2, op3, ..., opn}, and one or more defined event(s), signed as EVENTs = {event1, event2, event3, ..., eventn}. This model considers only five temporal operators, hence OPERATORs = {equal, before, after, meets, met_by}. Table 1 illustrates the general definitions of the temporal operators based on time intervals and time points. Figure 2 shows the



organization of the temporal elements involved in software version management. If we have a software system with a set of versions signed as V = {v1, v2, v3, ..., vn}, then the model is:

    TEMPORAL(vi ∈ V) → (tt ∧ avt ∧ rvt)

where,

    avt = [avt-from, avt-until],
    rvt = [rvt-from, rvt-until],
    rvt-from = {{opi ∈ OPERATORs} ∧ {eventi ∈ EVENTs}} and,
    rvt-until = {{opi ∈ OPERATORs} ∧ {eventi ∈ EVENTs}}.

Thus, if the software has a set of feature attributes Ai, then a complete scheme for temporal-based software version management can be signed as:

    S = {A1, A2, A3, ..., An, tt, avt-from, avt-until, rvt-from, rvt-until}

where Ai = attribute name of a version, tt ∈ P and avt-from, avt-until, rvt-from and rvt-until ∈ T.
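A minimal sketch of the scheme S as a record type follows; the attribute names mirror the paper's, while the concrete types (dates for tt and the absolute bounds, operator/event pairs for the relative bounds) are our assumptions.

```python
# Sketch of the version-record scheme
# S = {A1..An, tt, avt-from, avt-until, rvt-from, rvt-until}.
# Concrete types are assumptions: dates for tt and the absolute bounds,
# (operator, event) pairs for the relative bounds.
from dataclasses import dataclass
from datetime import date
from typing import Tuple

@dataclass
class VersionRecord:
    version_code: str            # a feature attribute Ai
    description: str             # another feature attribute Ai
    tt: date                     # transaction time, a point (tt in P)
    avt_from: date               # absolute valid time bounds
    avt_until: date
    rvt_from: Tuple[str, str]    # (operator, event), e.g. ("after", "v1.0 release")
    rvt_until: Tuple[str, str]

rec = VersionRecord("KEWNET 1.1", "bug-fix release",
                    date(2009, 5, 10), date(2009, 5, 12), date(2009, 9, 1),
                    ("after", "v1.0 retirement"), ("before", "v1.2 release"))
print(rec.version_code, rec.avt_from, rec.rvt_from)
```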

Table 2 exhibits the temporal-based version-record management representing the KEWNET software's version history. For example, KEWNET Ver. 1.1 has been updated three times. The first time, the version was recorded on tt3 with an absolute valid time from avf2 to avu3 and a relative valid time from rvf2 to rvu3. For the second update, on tt4, the absolute valid time is from avf2 to avu4 and the relative valid time is from rvf2 to rvu4. The version then had another change request, and therefore the version was given a new absolute valid time from avf2 to avu5 and a relative valid time from rvf2 to rvu5. This transaction was recorded on tt5.

Table 1: The definitions of temporal operators based on time points and time intervals

Temporal Operator   Time Point (t, ti in P)         Time Interval (i, ii in I)
equal               t = ti                          i and ii have the same start and end
before              t < ti                          i ends before ii starts
after               t > ti                          i starts after ii ends
meets               t immediately precedes ti       the end of i coincides with the start of ii
met_by              t immediately follows ti        the start of i coincides with the end of ii

Figure 2: Temporal elements in software version management (a software version carries a transaction time and a valid time, with the valid time split into absolute and relative components, each bounded by from and until)


    Table 2: Version-Record for KEWNET software

    Ver # tt avt-from avt-until rvt-from rvt-until

    1.0 tt1 avf1 avu1 rvf1 rvu1

    1.0 tt2 avf1 avu2 rvf1 rvu2

1.1 tt3 avf2 avu3 rvf2 rvu3
1.1 tt4 avf2 avu4 rvf2 rvu4

    1.1 tt5 avf2 avu5 rvf2 rvu5

    1.2 tt6 avf3 avu6 rvf3 rvu6

    1.2 tt7 avf3 avu7 rvf3 rvu7

    2.0 tt8 avf4 avu8 rvf4 rvu8

    2.0 tt9 avf4 avu9 rvf4 rvu9

    2.1 tt10 avf5 avu10 rvf5 rvu10

    5.2 The Temporal-Based Version Management

    Functionality

    To carry out experiments validating the model

    proposed, a client-server prototype has been

developed. The prototype has three main modules: register version, update the version valid time, and

    queries.

During the register version process, the software manager needs to record the foundational information of the software version. The attributes that need to be keyed in by the software manager can be signed as Av = {version code, date release, version description, origin version code, version id}. Figure 3 illustrates the screen sample used to register the basic information of the software version.

    Figure 3: Register the software version

On completion of a new software version registration, the software manager needs to update its valid time; this can be done using the module 'update the version valid time', illustrated in Figure 4. The attributes for this module are formed as AT = {version code, transaction date, description, date start, date end, time start, time end, update by, position}. The attribute transaction date is the current date and is auto-generated by the server.
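Since the model retains every past valid time (see Table 2 and the conclusion), an update appends a new transaction-stamped row instead of overwriting the old one; the sketch below shows that append-only behaviour, with an in-memory list standing in for the prototype's database table.

```python
# Append-only update in the spirit of the 'update the version valid time'
# module: each change adds a row stamped with the current transaction date,
# so earlier valid times stay queryable. A list stands in for the database.
from datetime import date

version_table = []   # rows of (version_code, tt, avt_from, avt_until)

def update_valid_time(version_code, avt_from, avt_until):
    row = (version_code, date.today(), avt_from, avt_until)
    version_table.append(row)    # never overwrite an existing row
    return row

def history(version_code):
    """All recorded valid times for a version, ordered by transaction time."""
    return sorted((r for r in version_table if r[0] == version_code),
                  key=lambda r: r[1])

update_valid_time("KEWNET 1.1", date(2009, 5, 12), date(2009, 9, 1))
update_valid_time("KEWNET 1.1", date(2009, 5, 12), date(2009, 11, 1))
print(history("KEWNET 1.1"))
```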

For any change to a software version's valid time, the software manager needs to perform an update using this form. The tool also allows the user to query the database. Users can browse the version valid time and status of any registered software, as shown in Figure 5. Meanwhile, Figure 6 shows the output form of a query for the whole history of valid times and statuses of a software version.


    Figure 4: Update the software version valid time

    Figure 5: The software version valid time report

    Figure 6: The transaction records of a version


    6. CONCLUSION

In practical software version management, it is frequently important to retain a perfect record of the past and current valid times of a version's states. We cannot replace or overwrite the record of the old valid time of a software version during the updating process. Hence, this paper introduces a new model for software version management based on temporal elements. An important issue discussed here is that temporal aspects such as valid time and transaction time are stamped on each software version so that the monitoring and conflict management processes can be carried out easily.

Based on the proposed model, a prototype has been developed. The prototype will be tested at ITC-UDM. It will be used to monitor and keep track of the evolution of software versions, system modules and software documents in the university's software. For further improvement, we are currently investigating related issues, including combining the model with change request management, considering more temporal operators and developing a standard temporal model for all configuration items in software configuration management.

    References:

[1] Bertino, E., Bettini, C., Ferrari, E. and Samarati, P. A Temporal Access Control Mechanism for Database Systems, IEEE Trans. on Knowledge and Data Engineering, 8, 1996, 67-79.
[2] C. E. Dyreson, W. S. Evans, H. Lin and R. T. Snodgrass. Efficiently Supporting Temporal Granularities, IEEE Trans. on Knowledge and Data Engineering, Vol. 12(4), 2000, 568-587.
[3] G. M. Clemm. Replacing Version Control With Job Control, ACM Proc. 2nd Intl. Workshop on Software Configuration Management, 1989, 162-169.
[4] D. Gao, C. S. Jensen, R. T. Snodgrass and M. D. Soo. Join Operations in Temporal Databases, The Very Large Database Journal, Vol. 14, 2005, 2-29.
[5] A. Dix, T. Rodden and I. Sommerville. Modelling Versions in Collaborative Work, IEE Proc. Software Engineering, 1997, 195-206.
[6] H. Gregersen and C. S. Jensen. Temporal Entity-Relationship Models - A Survey, IEEE Trans. on Knowledge and Data Engineering, 11, 1999, 464-497.
[7] A. Gustavsson. Maintaining the Evolution of Software Objects in an Integrated Environment, ACM Proc. 2nd Intl. Workshop on Software Configuration Management, 1989, 114-117.
[8] A. Havewala. The Version Control Process: How and Why it can save your project, Dr. Dobb's Journal, 24, 1999, 100-111.
[9] C. S. Jensen and R. T. Snodgrass. Temporal Data Management, IEEE Trans. on Knowledge and Data Engineering, 11, 1999, 36-44.
[10] K. Torp, C. S. Jensen and R. T. Snodgrass. Effective Timestamping in Databases, The Very Large Database Journal, Vol. 8, 1999, 267-288.
[11] B. Knight and J. Ma. A General Temporal Theory, The Computer Journal, 37, 1994, 114-123.
[12] B. Knight and J. Ma. A Temporal Database Model Supporting Relative and Absolute Time, The Computer Journal, 37, 1994, 588-597.
[13] A. Lie. Change Oriented Versioning in a Software Engineering Database, ACM Proc. 2nd Intl. Workshop on Software Configuration Management, 1989, 56-65.
[14] H. Mary. Beyond Version Control, Software Magazine, 16, 1996, 45-47.


EFFECTIVE DIGITAL FORENSIC ANALYSIS OF THE NTFS DISK IMAGE

    Mamoun Alazab, Sitalakshmi Venkatraman, Paul Watters

University of Ballarat, Australia
{m.alazab, s.venkatraman, p.watters}@ballarat.edu.au

ABSTRACT

Forensic analysis of the Windows NT File System (NTFS) could provide useful information leading towards malware detection and the presentation of digital evidence in a court of law. Since NTFS records every event of the system, forensic tools are required to process an enormous amount of information related to the user / kernel environment, buffer overflows, race conditions, the network stack, etc. This has led to imperfect forensic tools that are practical for implementation and hence have become popular, but are not comprehensive and effective. Many existing techniques have failed to identify malicious code in hidden data of the NTFS disk image. This research discusses the analysis technique we have adopted to successfully detect maliciousness in hidden data by investigating the NTFS boot sector. We have conducted experimental studies with some of the existing popular forensic tools and have identified their limitations. Further, through our proposed three-stage forensic analysis process, our experimental investigation attempts to unearth the vulnerabilities of the NTFS disk image and the weaknesses of current forensic techniques.

    Keywords: NTFS, forensics, disk image, data hiding.

    1 INTRODUCTION

Digital forensics is the science of identifying, extracting, analyzing and presenting the digital evidence that has been stored in digital electronic storage devices, to be used in a court of law [1, 2, 3]. While forensic investigation attempts to provide full descriptions of a digital crime scene, in computer systems the primary goals of digital forensic analysis are fivefold: i) to identify all the unwanted events that have taken place, ii) to ascertain their effect on the system, iii) to acquire the necessary evidence to support a lawsuit, iv) to prevent future incidents by detecting the malicious techniques used and v) to recognize the incitement reasons and intent of the attacker for future predictions [2, 4]. The general components of the digital forensic process are acquisition, preservation, and analysis [5].

Digital electronic evidence could be described as information and data of investigative value that are stored by an electronic device [6]. This research focuses on the abovementioned third goal of acquiring the necessary evidence of intrusions that take place on a computer system. In particular, this paper investigates the digital forensic techniques that could be used to analyze and acquire evidence from the most commonly used file system on computers, namely, the Windows NT File System (NTFS).

Today, the NTFS file system is the basis of the predominant operating systems in use, such as Windows 2000, Windows XP, Windows Server 2003, Windows Server 2008, Windows Vista, Windows 7 and even most free UNIX distributions [7, 8, 9]. Hence, malware writers try to target NTFS, as this could affect more computer users. Another compelling reason for witnessing a strong relationship between computer crime and the NTFS file system is the lack of literature unearthing the vulnerabilities of NTFS and the weaknesses of present digital forensic techniques [10]. This paper attempts to fill this gap by studying the techniques used in the analysis of the NTFS disk image. Our objectives are i) to explore the NTFS disk image structure and its vulnerabilities, ii) to investigate different commonly used digital forensic techniques, such as signatures, data hiding, timestamps, etc., and their weaknesses, and iii) finally to suggest improvements in the static analysis of the NTFS disk image.

    2 FORENSIC ANALYSIS PROCESS

In this section, we describe the forensic analysis process we adopted to achieve the abovementioned objectives of this research work. We conducted an empirical study using selected digital forensic tools that are predominantly used in practice. Several factors, such as effectiveness, uniqueness and robustness in analyzing the NTFS disk image, were considered in selecting the tools / utilities required

    Special Issue on ICIT 2009 Conference - Applied Computing

    UbiCC Journal Volume 4 No. 3 551

  • 8/14/2019 Special Issue on ICIT 2009 Conference Applied Computing - UbiCC Journal - UbiCC Journal, www.ubicc.org, Ubiqu

    19/82

for this empirical study. Since each utility provides some specific functionality, a collection of such tools was necessary to perform a comprehensive set of functions. Hence, the following forensic utilities / tools were adopted to conduct the experimental investigation in this research work:

i) Disk imaging utilities such as dd [11] or dcfldd V1.3.4-1 [12] for obtaining a sector-by-sector mirror image of the disk;

ii) Evidence collection utilities such as Hexedit [13], Frhed 1.4.0 [14] and Strings V2.41 [15] to introspect the binary code of the NTFS disk image;

iii) NTFS disk analysis software tools such as The Sleuth Kit (TSK) 3.01 [16], Autopsy [17] and NTFSINFO v1.0 [18] to explore and extract intruded data as well as hidden data for performing forensic analysis.

For the experimental investigation of the effectiveness of the above tools, we created test data on a Pentium(R) Core(TM) 2 Duo CPU, 2.19 GHz, with 2.98 GB of RAM, running Windows XP Professional with an NTFS file system partition. In this pilot empirical study, we focused on the boot sector of the NTFS disk image. We adopted the following three stages to perform digital forensic analysis in a comprehensive manner:

Stage 1: Hard disk data acquisition,
Stage 2: Evidence searching and
Stage 3: Analysis of the NTFS file system.

2.1 Stage 1 - Hard Disk Data Acquisition

As the first stage in forensic analysis, we used dcfldd, developed by Nicholas Harbour, and the dd utility from George Garner to acquire the NTFS disk image from the digital electronic storage device. These utilities were selected for investigation since they provide simple and flexible acquisition tools. The main advantage of using these tools is that we could extract the data in or between partitions to a separate file for further analysis. In addition, dcfldd provides built-in MD5 hashing features. Some of its salient features allow the analyst to calculate, save, and verify MD5 hash values. In digital forensic analysis, using a hashing technique is important to ensure data integrity, to identify which data values have been maliciously changed, and to explore known data objects [19].

2.2 Stage 2 - Evidence Searching

The next stage involved searching for evidence of system tampering. Evidence of intrusion could be gained by looking for known signatures and timestamps, as well as by searching for hidden data [20]. In this stage, we used the Strings command by Mark Russinovich, the Frhed hex-editor tool by Rihan Kibria and the WinHex hex-editor tool by X-Ways Software Technology AG to detect a keyword or phrase in the disk image.
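The effect of a Strings-plus-keyword search can be approximated in a few lines; the sketch below scans a disk image for printable ASCII runs containing a keyword, again assuming the Image.dd name from the acquisition stage.

```python
# Rough, illustrative equivalent of running Strings over a disk image and
# grepping for a keyword: find printable ASCII runs and report matches.
# Reads the whole image at once, which is fine for a small test image.
import re

def find_keyword(path, keyword, min_len=4):
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)  # printable runs
    with open(path, "rb") as f:
        data = f.read()
    return [(m.start(), m.group().decode("ascii"))
            for m in pattern.finditer(data)
            if keyword.encode() in m.group()]

for offset, text in find_keyword("Image.dd", "cmd.exe"):
    print(f"offset {offset:#x}: {text}")
```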

2.3 Stage 3 - Analysis of the NTFS File System

In the final stage of the experimental study, we analyzed the data obtained from the NTFS disk image, which contributed towards meaningful conclusions of the forensic investigation. We adopted a collection of tools, such as The Sleuth Kit (TSK) and Autopsy Forensic by Brian Carrier and NTFSINFO v1.0 from Microsoft Sysinternals by Mark Russinovich, to perform different aspects of the NTFS file system analysis.

    3 FORENSIC INVESTIGATION STEPS

Many aspects must be taken into consideration when conducting a computer forensic investigation. There are different approaches adopted by an investigator while examining a crime scene. From the literature, we find five steps adopted, namely policy and procedure development, evidence assessment, evidence acquisition, evidence examination, and documenting and reporting [26]. In our proposed approach for the digital forensic investigation, we adopted the following nine steps, as shown in Figure 1:

Step 1: Policy and Procedure Development. In this step, suitable tools that are needed at the digital scene are determined as part of administrative considerations. All aspects of policy and procedure development are considered to determine the mission statement, skills and knowledge, funding, personnel requirements, evidence handling and support from management.

Step 2: Hard Disk Acquisition. This step involves forensic duplication, achieved by obtaining an NTFS image of the original disk using the dd tool. The aim is a sector-by-sector mirror image of the disk; the output image file is created as image.dd.

Step 3: Check the Data Integrity. This step ensures the integrity of the acquired data through the reporting of a hash function. We used an MD5 tool to guarantee the integrity of the original media and the resulting image file.
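A minimal sketch of such a check (assuming a Unix-style md5sum utility; device and file names are hypothetical): compute the digest of the source volume and of the image, then compare the two.

md5sum /dev/hda1 > original.md5
md5sum image.dd > image.md5

Matching digests indicate that the image is a faithful bit-for-bit copy of the media; any divergence means the image should not be relied upon as evidence.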

Step 4: Extract MFT in the Boot Sector. In this step, the MFT information is extracted from the boot sector. We analyzed the MFT using the WinHex hex-editor tool and checked the number of sectors allocated to the NTFS file system using NTFSINFO.

Step 5: Extract $Boot File and Backup Boot Sector. In this step, the $Boot file is extracted to investigate hidden data. We analyzed the hidden data in the $Boot metadata file using the WinHex, TSK and Autopsy tools.

Step 6: Compare Boot Sector and Backup. A comparison of the original and backup boot sectors is performed in this step. We obtained another two images from the original image using the dd tool, resulting in two image files named backupbootsector.dd and bootsector.dd. We analyzed these two image files using the WinHex hex-editor, TSK and Autopsy tools.

Step 7: Check the Data Integrity. In this step, the integrity of the data is verified again as a test of congruence. We applied MD5 hashing to the two created image files to check their integrity.

Step 8: Extract the ASCII and UNICODE. This step involves extracting the ASCII and UNICODE characters from the binary files in the disk image. We used the Strings command tool and keyword searches for matching text or hexadecimal values recorded on the disk. Through keyword search, we could even find files that contain specific words.

Step 9: Physical Presentation. In this final step, all the findings from the forensic investigation are documented. It involves presenting the digital evidence through documentation and reporting procedures.

    Figure 1: Forensic investigation steps

    4 BOOT SECTOR ANALYSIS OF NTFS

4.1 NTFS Disk Image
As mentioned in the previous section, the first step for a digital forensic investigator is to acquire a duplicate copy of the NTFS disk image before beginning the analysis. This ensures that the data on the original devices are not changed during the analysis. It is therefore necessary to isolate the original infected computer and work on the disk image in order to extract the evidence found on the electronic storage devices. By conducting investigations on the disk image, we could unearth any hidden intrusions, since the image captures the invisible information as well [21]. The advantages of analyzing disk images are that the investigators can: a) preserve the digital crime scene, b) obtain the information in slack space, c) access unallocated space, free space, and used space, d) recover file fragments, hidden or deleted files and directories, e) view the partition structure and f) get date stamps and ownership of files and folders [3, 22].

4.2 Master File Table
To investigate how intrusions result in data hiding, data deletion and other obfuscations, it is essential to understand the physical characteristics of the Microsoft NTFS file system. The Master File Table (MFT) is the core of NTFS, since it contains details of every file and folder on the volume and allocates two sectors for every MFT entry [23]. Hence, a good knowledge of the MFT layout structure also facilitates the disk recovery process. Each MFT entry has a fixed size of 1 KB (the MFT record size can be identified at byte offset 64 in the boot sector). We provide the MFT layout and represent the plan of the NTFS file system in Figure 2. The main purpose of NTFS is to facilitate reading and writing of file attributes, and the MFT enables a forensic analyst to examine in some detail the structure and working of the NTFS volume. Therefore, it is important to understand how the attributes are stored in the MFT entry.

The key feature to note is that an MFT entry contains attributes that can have any format and any size. Further, as shown in Figure 2, every file record begins with an entry header, which occupies its first 42 bytes; each attribute in turn consists of an attribute header and attribute content. The attribute header identifies the size, name and flag value. The attribute content can reside in the MFT, following the attribute header, if its size is less than 700 bytes (known as a resident attribute); otherwise the attribute content is stored in an external cluster called a cluster run (known as a non-resident attribute). This is because the MFT entry is 1 KB in size and hence cannot fit anything that occupies more than 700 bytes.
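As a quick illustration of the boot-sector field mentioned above, the record-size byte at offset 64 can be read directly from the image (a sketch using Unix-style tools; the image name is hypothetical):

dd if=image.dd bs=1 skip=64 count=1 2>/dev/null | xxd

A stored value of 0xF6 (that is, -10 as a signed byte) denotes a record size of 2^10 = 1024 bytes, whereas a small positive value would instead denote a count of clusters per record.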


    Figure 2: MFT layout structure

4.3 Boot Sector Analysis and Results
We performed boot sector analysis by investigating the metadata files that describe the file system. Following the steps described in the previous section (Figure 1), we first created an NTFS disk image of the test computer using the dd utility in order to investigate the boot sector. We then used the NTFSINFO tool on the disk image, as shown in Table 1, which presents the boot sector of the test device and information about the on-disk structure. Such data structure examination enables us to view the MFT information, allocation size, volume size and metadata files. We extracted useful information such as the size of clusters, the number of sectors in the file system, the starting cluster address of the MFT, the size of each MFT entry and the serial number of the file system.
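NTFSINFO is pointed at a mounted volume rather than at a raw image file, so a run against the test partition might look as follows (a sketch; the drive letter f: is hypothetical and assumes the image has been mounted read-only):

ntfsinfo f:

Its output corresponds to the volume size, allocation size and MFT information fields reproduced in Table 1.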

    Table 1: NTFS Information Details.

Volume Size
-----------
Volume size : 483 MB
Total sectors : 991199
Total clusters : 123899
Free clusters : 106696
Free space : 416 MB (86% of drive)

Allocation Size
---------------
Bytes per sector : 512
Bytes per cluster : 4096
Bytes per MFT record : 1024
Clusters per MFT record : 0

MFT Information
---------------
MFT size : 0 MB (0% of drive)
MFT start cluster : 41300
MFT zone clusters : 41344 - 56800
MFT zone size : 60 MB (12% of drive)
MFT mirror start : 61949

Meta-Data files

From the information gained above, we followed the steps in Figure 1 to analyze the boot sector image. As shown in Figure 3, we performed an analysis of the data structure of this boot sector; the results of the investigation conducted using existing forensic tools are summarized in Table 3. From these results, we conclude that the existing forensic tools do not check for possible infections that could take place in certain hidden data of the boot sector. Hence, we describe the hidden data analysis technique that we adopted in the next section.

    5 HIDDEN DATA ANALYSIS AND RESULTS

Recent cyber crime trends use different obfuscation techniques such as disguising file names, hiding attributes and deleting files to intrude into the computer system. Since the Windows operating system does not zero the slack space, it becomes a vehicle for hiding data, especially in the $Boot file. Hence, in this study, we analyzed the hidden data in the $Boot file structure. The $Boot entry is stored in a metadata file at the first cluster, in sector 0 of the file system, called $Boot, from where the system boots. It is the only metadata file with a static location, so it cannot be relocated. Microsoft allocates the first 16 sectors of the file system to $Boot, and only half of these sectors contain non-zero values [3].
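The $Boot region can be pulled out of the image for separate inspection either by sector arithmetic with dd or through TSK's icat applied to MFT entry 7 (a sketch; output file names are hypothetical):

dd if=image.dd bs=512 count=16 of=bootfile.dd
icat -f ntfs image.dd 7 > bootfile.bin

On a standard volume both extractions should cover the same first 16 sectors, so comparing them is itself a useful sanity check.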

In order to investigate the NTFS file system, one needs substantial knowledge and experience to analyze the data structure and the hidden data [24]. The $Boot metadata file is located in MFT entry 7 and contains the boot sector of the file system, including information about the size of the volume, the clusters and the MFT. The $Boot metadata file structure has four attributes, namely $STANDARD_INFORMATION, $FILE_NAME, $SECURITY_DESCRIPTOR and $DATA. The $STANDARD_INFORMATION attribute contains temporal information such as flags, owner, security ID and the last accessed, written, and created times. The $FILE_NAME attribute contains the file name in UNICODE, along with size and temporal information. The $SECURITY_DESCRIPTOR attribute contains information about access control and security properties. Finally, the $DATA attribute contains the file contents. These attribute values for the test sample are shown in Table 2 as an illustration. To obtain them, we used the following TSK command:

istat -f ntfs c:\image.dd 7

From our investigation of the resulting attribute values, we find that the $Boot data structure of the NTFS file system could be used to hide data. Analyzing the hidden data in the boot sector can therefore provide useful information for digital forensics. The size of the data that could be hidden in the boot sector is limited by the number of non-zero sectors that Microsoft allocated in the first 16 sectors of the file system. Data could be hidden in the $Boot metadata file without raising suspicion and without affecting the functionality of the system [25].
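One quick way to flag such hidden content is to test whether the normally zero-filled portion of the $Boot allocation contains any non-zero bytes (a sketch using Unix-style tools, assuming the second half of the 16 sectors is expected to be zero):

dd if=image.dd bs=512 skip=8 count=8 2>/dev/null | tr -d '\0' | wc -c

A non-zero count from this pipeline means bytes have been written into a region the file system itself leaves empty, and warrants closer inspection in a hex editor.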

    Table 2: Results of $Boot Analysis

MFT Entry Header Values:
Entry: 7        Sequence: 7
$LogFile Sequence Number: 0
Allocated File
Links: 1

$STANDARD_INFORMATION Attribute Values:
Flags: Hidden, System
Owner ID: 0
Created: Mon Feb 09 12:09:06 2009
File Modified: Mon Feb 09 12:09:06 2009
MFT Modified: Mon Feb 09 12:09:06 2009
Accessed: Mon Feb 09 12:09:06 2009

$FILE_NAME Attribute Values:
Flags: Hidden, System
Name: $Boot
Parent MFT Entry: 5     Sequence: 5
Allocated Size: 8192    Actual Size: 8192
Created: Mon Feb 09 12:09:06 2009
File Modified: Mon Feb 09 12:09:06 2009
MFT Modified: Mon Feb 09 12:09:06 2009
Accessed: Mon Feb 09 12:09:06 2009

Attributes:
Type: $STANDARD_INFORMATION (16-0) Name: N/A Resident size: 48
Type: $FILE_NAME (48-2) Name: N/A Resident size: 76
Type: $SECURITY_DESCRIPTOR (80-3) Name: N/A Resident size: 116
Type: $DATA (128-1) Name: $Data Non-Resident size: 8192
0 1

Analysis of the $Boot data structure of the NTFS file system will identify any hidden data. The analyst should start by comparing the boot sector with the backup boot sector. The boot sector and the backup boot sector are supposed to be identical; otherwise there is some data hidden in the $Boot data structure. One method is to check the integrity of the backup boot sector and the boot sector by calculating the MD5 of both. A difference in the checksums indicates that there is some hidden data. We performed this comparison by applying the following commands to the $Boot image file and the backup boot image:

dd if=image.dd bs=512 count=1 skip=61949 of=c:\backupbootsector.dd --md5sum --verifymd5 --md5out=c:\hash1.md5

dd if=image.dd bs=512 count=1 of=c:\bootsector.dd --md5sum --verifymd5 --md5out=c:\hash2.md5
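Equivalently, once the two single-sector files have been created they can be compared byte by byte with a native Windows command (a sketch):

fc /b c:\bootsector.dd c:\backupbootsector.dd

Any reported difference pinpoints the offsets at which the boot sector and its backup diverge, each of which is a candidate location of hidden data.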

We found that hidden data in the $Boot data structure could not be detected directly by the existing tools used in this study; manual inspections were required alongside these forensic tools. Hence, through the analysis conducted with various existing utilities and tools, we arrived at the following results:

i) Since NTFS stores all events that take place on a computer system, a huge amount of data analysis is required when scanning the entire NTFS disk image for forensic purposes. In this empirical study, by merely focusing on the hidden data of the $Boot file, we have shown that a variety of tools and utilities had to be adopted along with manual inspections. Hence, it takes an enormous amount of time to analyze the data derived with such tools.

ii) The existing forensic tools are not comprehensive and effective in identifying recent computer threats. Not all computer infections are detected by forensic tools; in particular, intrusions in the form of hidden data in the $Boot file go unchecked.

iii) It was mandatory to perform manual investigations alongside the existing tools. By adopting a manual introspection of the $Boot file using the three-stage approach of i) hard disk acquisition, ii) evidence searching and iii) analysis of the NTFS file system, we could successfully identify hidden data in the $Boot file.

iv) Intelligent search techniques could be adopted to extract the ASCII and UNICODE characters from binary files in the disk image, on either the full file system image or just the unallocated space, which could speed up the process of identifying hidden data (see the sketch after this list).

v) One of the main reasons for needing a variety of tools is that Microsoft has different versions of the NTFS file system to be catered for. While Windows XP and Windows Server 2003 use the same NTFS version, Windows Vista uses NTFS version 3.1 [7]. The new NTFS 3.1 has changed the on-disk structure; for example, the location of the volume boot record is at physical sector 2,048. Most of the existing tools do not work with all the different versions of the NTFS file system, and hence a comprehensive tool is warranted to cope with these changes.
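For instance, TSK can carve out just the unallocated blocks, which are then swept for ASCII and UNICODE strings (a sketch; file names are hypothetical, and the -e l option of GNU strings selects the 16-bit little-endian encoding used for UNICODE text on NTFS):

blkls -f ntfs image.dd > unalloc.blkls
strings -t d unalloc.blkls > ascii_hits.txt
strings -t d -e l unalloc.blkls > unicode_hits.txt

Restricting the sweep to unallocated space keeps the output far smaller than a scan of the full image, which supports the speed-up noted in point iv above.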


Figure 3: Analysis of the test boot sector

Table 3: Results from the analysis of the test boot sector.

Byte Range | Size | Description | Value | Action / Result
0 -- 2     | 3   | Jump to boot code | 9458411 | If bootable, jump; if non-bootable, used to store error message
3 -- 10    | 8   | OEM Name (System ID) | NTFS |
11 -- 12   | 2   | Bytes per sector | 512 |
13 -- 13   | 1   | Sectors per cluster | 8 |
14 -- 15   | 2   | Reserved sectors | 0 | Unused - Possible Infection
16 -- 20   | 5   | Unused | 0 | Unused - Possible Infection
21 -- 21   | 1   | Media descriptor | 0 |
22 -- 23   | 2   | Unused | 0 | Unused - Possible Infection
24 -- 25   | 2   | Sectors per track | 63 | No Check - Possible Infection
26 -- 27   | 2   | Number of heads | 255 | No Check - Possible Infection
28 -- 31   | 4   | Unused | 32 | No Check - Possible Infection
32 -- 35   | 4   | Unused | 0 | Unused - Possible Infection
36 -- 39   | 4   | Drive type check | 80 00 00 00 | For USB thumb drive
40 -- 47   | 8   | Number of sectors in file system (volume) | 0.47264 GB |
48 -- 55   | 8   | Starting cluster address of $MFT | 4*8=32 |
56 -- 63   | 8   | Starting cluster address of MFT Mirror $DATA attribute | 61,949 |
64 -- 64   | 1   | Size of record - MFT entry | 2^10=1024 |
65 -- 67   | 3   | Unused | 0 | Unused - Possible Infection
68 -- 68   | 1   | Size of index record | 01h |
69 -- 71   | 3   | Unused | 0 | Unused - Possible Infection
72 -- 79   | 8   | Serial number | C87C8h |
80 -- 83   | 4   | Unused | 0 | Unused - Possible Infection
84 -- 509  | 426 | Boot code | ~ |
510 -- 511 | 2   | Boot signature | 0xAA55 |


6 CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS

Recent methods adopted by computer intruders, attackers and malware target hidden and deleted data, so that they can evade virus scanners and become difficult to identify even with existing digital forensic tools. This paper has explored the difficulties involved in digital forensics, especially in conducting NTFS disk image analysis, and has proposed an effective digital forensic analysis process.

In this empirical study, we have found that the boot sector of the NTFS file system could be used by computer attackers as a vehicle to hide data, as there is a potential weakness. We have emphasized the knowledge and importance of file systems for digital forensics, as several data hiding techniques, such as the use of slack space and hidden attributes, have recently been adopted by attackers. This is an important NTFS file system weakness to be addressed, and research in this domain could lead to an effective solution for the open problem of detecting new malicious code that makes use of such an obfuscated mode of attack. We have shown that the existing forensic software tools are not competent enough to comprehensively detect all hidden data in boot sectors.

As a first step to address this problem, we have proposed a three-stage forensic analysis process consisting of nine steps to facilitate the experimental study, and we have reported the results gathered by following these steps. By adopting effective search techniques, we were successful in identifying some unknown malicious hidden data in the $Boot file that went undetected by current forensic tools.

In this pilot study we adopted a few forensic techniques and effective manual inspections of the NTFS file image. Our future research direction is to automate the proposed process so as to facilitate forensic analysis of the NTFS disk image in an efficient and comprehensive manner. We plan to extract and extrapolate malware signatures effectively and intelligently for any existing and even new malware that uses hidden and obfuscated modes of attack. We intend to automate the knowledge of how to extract data from hidden data structures and how to reclaim deleted data, and we believe this would extensively benefit the digital evidence collection and recovery process.

    7 REFERENCES

[1] M. Reith, C. Carr & G. Gunsch: An examination of digital forensic models, International Journal of Digital Evidence, 1, pp. 1-12 (2002).

[2] M. Alazab, S. Venkatraman & P. Watters: Digital forensic techniques for static analysis of NTFS images, Proceedings of ICIT2009, Fourth International Conference on Information Technology, IEEE Xplore (2009).

[3] B. Carrier: File System Forensic Analysis, Addison-Wesley Professional, USA (2008).

[4] S. Ardisson: Producing a Forensic Image of Your Client's Hard Drive? What You Need to Know, Qubit, 1, pp. 1-2 (2007).

[5] M. Andrew: Defining a Process Model for Forensic Analysis of Digital Devices and Storage Media, Proceedings of SADFE2007, Second International Workshop on Systematic Approaches to Digital Forensic Engineering, pp. 16-30 (2007).

[6] Electronic Crime Scene Investigation: A Guide for First Responders, US Department of Justice, NCJ (2001).

[7] A. Svensson: Computer Forensics Applied to Windows NTFS Computers, Stockholm University, Royal Institute of Technology (2005).

[8] NTFS, http://www.ntfs.com, accessed 22/2/2009.
[9] D. Purcell & S. Lang: Forensic Artifacts of Microsoft Windows Vista System, Lecture Notes in Computer Science, Springer, 5075, pp. 304-319 (2008).

[10] T. Newsham, C. Palmer, A. Stamos & J. Burns: Breaking forensics software: Weaknesses in critical evidence collection, Proceedings of the 2007 Black Hat Conference (2007).

[11] DD tool, George Garner's site, http://users.erols.com/gmgarner/forensics/, accessed January 2009.

[12] DCFLDD tool, Nicholas Harbour, http://dcfldd.sourceforge.net/, accessed 14/1/2009.

[13] WinHex tool, X-Ways Software Technology AG, http://www.x-ways.net/winhex/, accessed January 2009.

[14] FRHED tool, Raihan Kibria's site, http://frhed.sourceforge.net/, accessed 14/1/2009.

[15] STRINGS, Mark Russinovich, http://technet.microsoft.com/en-us/sysinternals/bb897439.aspx, accessed January 2009.

[16] TSK tools, Brian Carrier's site, http://www.sleuthkit.org/sleuthkit/, accessed 14/1/2009.

[17] Autopsy tools, Brian Carrier's site, http://www.sleuthkit.org/autopsy/, accessed January 2009.

[18] NTFSINFO tool, Mark Russinovich, http://technet.microsoft.com/en-au/sysinternals/bb897424.aspx, accessed January 2009.

[19] V. Roussev, Y. Chen, T. Bourg & G. Richard: Forensic file system hashing revisited, Digital Investigation, Elsevier, 3, pp. 82-90 (2006).

[20] K. Chow, F. Law, M. Kwan & K. Lai: The Rules of Time on NTFS File System, Proceedings of the Second International Workshop on Systematic Approaches to Digital Forensic Engineering, pp. 71-85 (2007).

[21] K. Jones, R. Bejtlich & C. Rose: Real Digital Forensics: Computer Security and Incident Response, Addison-Wesley Professional, USA (2008).

[22] H. Carvey: Windows Forensic Analysis DVD Toolkit, Syngress Press, USA (2007).

[23] L. Naiqi, W. Yujie & H. QinKe: Computer Forensics Research and Implementation Based on NTFS File System, CCCM'08, ISECS International Colloquium on Computing, Communication, Control, and Management (2008).

[24] J. Aquilina, E. Casey & C. Malin: Malware Forensics: Investigating and Analyzing Malicious Code, Syngress Publishing, USA (2008).

[25] E. Huebner, D. Bem & C. Wee: Data hiding in the NTFS file system, Digital Investigation, Elsevier, 3, pp. 211-226 (2006).

[26] S. Hart, J. Ashcroft & D. Daniels: Forensic examination of digital evidence: a guide for law enforcement, National Institute of Justice NIJ-US, Washington DC, USA, Tech. Rep. NCJ (2004).


JOB AND APPLICATION-LEVEL SCHEDULING IN DISTRIBUTED COMPUTING

    Victor V. Toporkov

    Computer Science Department, Moscow Power Engineering Institute,ul. Krasnokazarmennaya 14, Moscow, 111250 Russia

    [email protected]

    ABSTRACT

    This paper presents an integrated approach for scheduling in distributed computing

    with strategies as sets of job supporting schedules generated by a critical works

    method. The strategies are implemented using a combination of job-flow and

    application-level techniques of scheduling within virtual organizations of Grid.

    Applications are regarded as compound jobs with a complex structure containing

    several tasks co-allocated to processor nodes. The choice of the specific schedule

    depends on the load level of the resource dynamics and is formed as a resource

request, which is sent to a local batch-job management system. We propose a scheduling framework and compare diverse types of scheduling strategies using

    simulation studies.

    Keywords: distributed computing, scheduling, application level, job flow,

    metascheduler, strategy, supporting schedules, task, critical work.

    1 INTRODUCTION

    The fact that a distributed computational

    environment is heterogeneous and dynamic along

    with the autonomy of processor nodes makes it much

    more difficult to manage and assign resources for job

    execution at the required quality level [1].

    When constructing a computing environment

    based on the available resources, e.g. in the model

    which is used in X-Com system [2], one normally

    does not create a set of rules for resource allocation

    as opposed to constructing clusters or Grid-based

virtual organizations. This is reminiscent of some techniques implemented in the Condor project [3, 4].

Non-clustered Grid resource computing environments use a similar approach. For

example, @Home projects, which are based on the BOINC system, realize cycle stealing, i.e. they exploit either idle computers or idle cycles of a specific computer.

    Another still similar approach is related to the

    management of distributed computing based on

    resource broker assignment [5-11]. Besides Condor

    project [3, 4], one can also mention several

    application-level scheduling projects: AppLeS [6],

    APST [7], Legion [8], DRM [9], Condor-G [10], and

    Nimrod/G [11].

It is known that scheduling jobs with

    independent brokers, or application-level scheduling,

    allows adapting resource usage and optimizing a

    schedule for the specific job, for example, decreasing

    its completion time. Such approaches are important,

    because they take into account details of job structure

and users' resource load preferences [5]. However,

    when independent users apply totally different

    criteria for application optimization along with job-

    flow competition, it can degrade resource usage and

    integral performance, e.g. system throughput,

processor nodes' load balance, and job completion

    time.

An alternative way of scheduling in distributed

    computing based on virtual organizations includes a

    set of specific rules for resource use and assignment

that regulate mutual relations between users and resource owners [1]. In this case only job-flow level

    scheduling and allocation efficiency can be

    increased. Grid-dispatchers [12] or metaschedulers

act as managing centres, as in the GrADS

project [13]. However, the joint computing nature of

    virtual organizations creates a number of serious

    challenges. Under such conditions, when different

    applications are not isolated, it is difficult to achieve

    desirable resource performance: execution of the

users' processes can cause an unpredictable impact on the execution time of other neighbouring processes.

Therefore, there is research that pays attention to the creation of virtual machine based virtual Grid workspaces by means of specialized operating systems, e.g., in the new European project XtreemOS (http://www.xtreemos.eu).

    Inseparability of the resources makes it much

    more complicated to manage jobs in a virtual

    organization, because the presence of local job-flows

launched by owners of processor nodes should be taken into account. Dynamic load balancing of

    different job-flows can be based on economical

principles [14] that support a fair-share division model for users and owners. The actual presence of job-flows requires forecasting the resource state and their

    reservation [15], for example by means of Maui

    cluster scheduler simulation approach or methods,

    implemented in systems such as GARA, Ursala, and

    Silver [16].

    The above-mentioned works are related to either

    job-flow scheduling problems or application-level

    scheduling.

The fundamental difference between them and the approach described here is that the resultant dispatching

    strategies are based on the integration of job-flows

    management methods and compound job scheduling

    methods on processor nodes. It allows increasing the

quality of service for the jobs and the resource usage efficiency of the distributed environment.

It is considered that the job can be compound (multiprocessor) and that the tasks included in the job

    are heterogeneous in terms of computation volume

    and resource need. In order to complete the job, one

    should co-allocate the tasks to different nodes. Each

task is executed on a single node, and it is supposed that the local management system interprets it as a

    job accompanied by a resource request.

    On one hand, the structure of the job is usually

    not taken into account. The rare exception is the

    Maui cluster scheduler [16], which allows for a

    single job to contain several parallel, but

    homogeneous (in terms of resource requirements)

    tasks. On the other hand, there are several resource-

    query languages. Thus, JDL from WLMS

(http://edms.cern.ch) defines alternatives and preferences when making a resource query, and ClassAds extensions in Condor-G [10] allow forming

resource queries for dependent jobs. The execution of compound jobs is also supported by the WLMS scheduling system of the gLite platform

    (http://www.glite.org), though the resource

    requirements of specific components are not taken

    into account.

    What sets our work apart from other scheduling

    research is that we consider coordinated application-

    level and job-flow management as a fundamental part

    of the effective scheduling strategy within the virtual

    organization.

The distributed state of the environment, the dynamics of its configuration, and users' and owners' preferences cause the need to build multifactor and multicriteria job

    managing strategies [17-20]. Availability of

    heterogeneous resources, data replication policies

    [12, 21, 22] and multiprocessor job structure for

    efficient co-allocation between several processor

    nodes should be taken into account.

    In this work, the multicriteria strategy is regarded

as a set of supporting schedules in order to cover possible events related to resource availability.

    The outline of the paper is as follows.

    In section 2, we provide details of application-

    level and job-flow scheduling with a critical works

    method and strategies as sets of possible supporting

    schedules.

    Section 3 presents a framework for integrated

    job-flow and application-level scheduling.

    Simulation studies of coordinated scheduling

    techniques and results are discussed in Section 4.

    We conclude and point to future directions in

    Section 5.

    2 APPLICATION-LEVEL AND JOB-FLOW

    SCHEDULING STRATEGIES

    2.1 Application-Level Scheduling Strategy

    The application-level scheduling strategy is a set

    of possible resource allocation and supporting

    schedules (distributions) for all N tasks in the job

    [18]:

Distribution := <Allocation_{1,j}, ..., Allocation_{N,j}, Start_{1,...,N}, End_{1,...,N}>,

where Allocation_{i,j} is the processor node j allocated to Task i, i = 1, ..., N, and Start_{1,...,N}, End_{1,...,N} are the run times and stop times for the execution of Task 1, ..., Task N.

The time interval [Start, End] is treated as the so-called walltime (WT), defined at the resource

    reservation time [15] in the local batch-job

    management system.
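As a purely hypothetical illustration of this structure, a job with N = 2 tasks might have the supporting schedule

Distribution = <Allocation_{1,3}, Allocation_{2,5}, Start_1 = 0, End_1 = 20, Start_2 = 20, End_2 = 50>,

i.e., Task 1 runs on processor node 3 with walltime [0, 20] and Task 2 runs on node 5 with walltime [20, 50]; the node indices and times here are invented solely for the example.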

    Figure 1 shows some examples of job graphs in

    strategies with different degrees of distribution, task

    details, a