
[IEEE 2010 International Symposium on Information Technology (ITSim 2010) - Kuala Lumpur, Malaysia (2010.06.15-2010.06.17)] 2010 International Symposium on Information Technology -


Improvements over Two Phase Shortest Path Algorithm

Muhammad Aasim Qureshi, Mohd Fadzil Hassan

Computer and Information Science Department Universiti Teknologi PETRONAS

Perak, Malaysia [email protected], mfadzil [email protected],

Abstract: The shortest path problem is a classical problem in theoretical computer science. This work continues the authors' previous work and presents two possible improvements to the existing algorithm. The introduced improvements are simple, fit easily into the existing algorithm, and improve its complexity. The need for improvement is discussed in detail, and the expected reduction in overall processing time is illustrated with an example.

Keywords: Theoretical Computer Science, Graph Theory, Algorithm, Shortest Path Problem.

1. INTRODUCTION

The Shortest Path Problem can formally be defined as follows:

Let G be a graph such that G = (V, E), where V = {v1, v2, v3, ..., vn} and E = {e1, e2, e3, ..., em} such that |V| = n and |E| = m. G is an undirected weighted connected graph with no negative-weight edge, with a pre-specified source vertex 's' and destination vertex 't' such that s ∈ V and t ∈ V. We have to find a simple path from s to t with minimum total edge weight.
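As a concrete illustration of the definition above, such a graph can be held in a plain adjacency list. This is only a sketch; the vertex names and weights are illustrative and not taken from the paper:

```python
from collections import defaultdict

def make_graph(edges):
    """Build an undirected weighted graph G = (V, E) as an adjacency list
    from (u, v, weight) triples; negative weights are excluded."""
    adj = defaultdict(list)
    for u, v, w in edges:
        assert w >= 0, "the definition excludes negative-weight edges"
        adj[u].append((v, w))
        adj[v].append((u, w))  # undirected: store the edge both ways
    return dict(adj)

# A tiny example instance with source s and destination t.
G = make_graph([("s", "a", 2), ("a", "t", 3), ("s", "t", 7)])
```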

The Shortest Path Problem is one of the classic problems in algorithmic graph theory. Its innate nature and wide range of applications give it an important role in solving many optimization problems, especially in networks, where efficient shortest path algorithms become an important part of the actual problem. As a matter of fact, many problems can be modeled as shortest path problems. The shortest path problem is extensively applied in communication, computer systems, transportation networks and many other practical problems [1]. Shortest path algorithms also support the claim of the shortest processes for development [17] [18] [21]. Its solution has led to efficient management of control flow in workflow processes [25] [26] [27].

The single-source shortest paths problem (SSSP) is one of the classic problems in algorithmic graph theory, which is one of the hard areas of theoretical computer science [19] [20]. Since 1959, all theoretical developments in SSSP for general directed and undirected graphs have been based on Dijkstra's algorithm, visiting the vertices in order of increasing distance from s. Many real-life problems can be represented as SSSP. As such, SSSP has been extensively applied in communication, computer systems, transportation networks and many other practical problems [1].

978-1-4244-6716-7/10/$26.00 ©2010 IEEE

The complexity of Dijkstra's algorithm [10] has been determined as O(n² + m) if linear search is used to calculate the minimum [2]. A new heap data structure was introduced in [4] [5] to calculate the minimum, which improved the complexity to O(m log n). The complexity was further improved in [9] when Fredman and Tarjan developed the Fibonacci heap. The work in [9] was an optimal implementation of Dijkstra's algorithm, in contrast to the common implementation approach of Dijkstra's algorithm which visits the vertices in sorted order. Using the fusion trees of [8], we get an O(m √(log n)) randomized bound. Their later atomic heaps give an O(m + n log n / log log n) bound, as presented in [7]. Afterwards, in [11] [12] [16], priority queues were introduced which gave an O(m log log n) bound and an O(m + n (log n)^(1/2+ε)) bound. These bounds are randomized assuming that we want linear space. Afterwards [14] reduced it to O(m + n √(log n log log n)), and it was further improved, with a randomized bound of O(m + n (log n)^(1/3+ε)), by [15].

The priority queue presented in [6] for SSSP exploited the shortest path cost, giving a running time of O(m + n √(log C)), where C is the cost of the heaviest edge. The work in [13] reduced the complexity to O(m + n (log C log log C)^(1/3)), and [15] presented a further improvement to O(m + n (log C)^(1/4+ε)). The work reported in [3] presented an algorithm and claimed that it would outclass Dijkstra's algorithm.

2. INTRODUCTION TO TWO-PHASE ALGORITHM

This research is a continuation of the work presented in [24] [22] [23]. The basic idea of the algorithm is the classification and identification of the nodes that have a high probability of causing a problem/error during a Breadth First Search (BFS)-like traversal. Processing through such nodes is delayed until all possible updates have been done on them.

3. TWO-PHASE ALGORITHM

The algorithm is divided into two phases. Both are similar in nature, but their purposes are somewhat different. Phase I provides the initialization of data structures for Phase II. Along with the identification and marking of the levels of the nodes, this phase also identifies initial-level CULPRIT nodes (a node is said to be CULPRIT if it is updated by a successor at least two levels ahead). This is necessary as it provides a base for the next phase. Phase II starts with the initializations provided by Phase I and continues with the nodes level by level, skipping and identifying CULPRIT nodes. Figure 1 provides a brief example of the working of this algorithm.

To start with, all nodes except the source node are colored WHITE, designated with no parent, given a distance value of ∞, marked with type Normal and level −1, while the source is colored GRAY, designated with no parent, given a distance value of 0, marked with type Normal and level 0. The algorithm proceeds in BFS fashion (with a few violations and additions), exploring the adjacent nodes level by level. Starting with the source node, all its adjacent nodes are explored one by one, updating the information at the node being explored. When a node explores one of its adjacent nodes, the adjacent node's color is updated to GRAY, it is marked with level (level of exploring node + 1), and its distance is set to min(distance of exploring node + edge weight, old distance); if the distance changes, the exploring node becomes the parent of the node being explored. If this node updates some node that is at least two levels up, that node is marked CULPRIT. The same processing is repeated for all adjacent nodes, and as soon as all neighbors are explored, the color of the exploring node is marked BLACK. This process continues level by level until all nodes are marked BLACK except the destination node.
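The Phase I traversal described above can be sketched roughly as follows. This is a simplified reconstruction from the description, not the authors' code; in particular, the exact moment at which a node's level is reassigned on a distance update is an assumption:

```python
import math
from collections import deque

WHITE, GRAY, BLACK = 0, 1, 2

def phase1(adj, s):
    """BFS-like Phase I: assign levels and tentative distances, and
    flag CULPRIT nodes (nodes updated from at least two levels ahead)."""
    color = {v: WHITE for v in adj}
    dist = {v: math.inf for v in adj}
    level = {v: -1 for v in adj}
    parent = {v: None for v in adj}
    culprit = set()
    color[s], dist[s], level[s] = GRAY, 0, 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v, w in adj[u]:
            if color[v] == WHITE:              # first discovery, BFS-style
                color[v] = GRAY
                level[v] = level[u] + 1
                q.append(v)
            if dist[u] + w < dist[v]:          # relaxation step
                dist[v] = dist[u] + w
                parent[v] = u
                new_level = level[u] + 1
                if new_level - level[v] >= 2:  # updated from two levels ahead
                    culprit.add(v)
                level[v] = new_level
        color[u] = BLACK                       # all neighbours explored
    return dist, level, parent, culprit

# A path s-b-c-d-e of light edges plus one heavy shortcut edge s-e:
# e is first settled through the shortcut and later improved through
# the longer, lighter path, which makes it a CULPRIT node.
adj = {"s": [("b", 1), ("e", 10)], "b": [("s", 1), ("c", 1)],
       "c": [("b", 1), ("d", 1)], "d": [("c", 1), ("e", 1)],
       "e": [("d", 1), ("s", 10)]}
dist, level, parent, culprit = phase1(adj, "s")
```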

Phase II uses the results of Phase I as its initialization, except that all nodes are repainted WHITE and all CULPRIT nodes are painted GRAY. After initialization, the lowest-level CULPRIT node is picked and all its adjacent nodes are explored as per the rules explained in Phase I. After exploring all its adjacent nodes, it is colored BLACK. Processing continues level by level; meanwhile, if some BLACK node's distance is updated and the difference between its old level and new level is 2 or more, the node is marked CULPRIT and colored GRAY again. In this way the algorithm proceeds until all nodes are colored BLACK (except the destination).

4. SCOPE FOR IMPROVEMENTS

The complexity discussed in [24] is O(k(|V|+|E|)), where k < log D. The value of k makes the algorithm quite expensive, and it needs to be reduced. This paper targets improvement of the Two-Phase Algorithm by attempting to reduce the value of k. To attempt an improvement, it is very important to know the reasons behind the high value of k.

The algorithm proceeds in BFS fashion; in particular, Phase I is purely BFS, so its complexity is O(|V|+|E|). In Phase II, however, processing restarts as soon as a new CULPRIT node is found, and previously processed nodes of the sub-tree rooted at the CULPRIT node need to be reprocessed. So the value of k depends upon the number of CULPRIT nodes that are found. It follows that if there are fewer CULPRIT nodes, i.e. a smaller value of k, then the total complexity will be better.


A. Why a node is marked CULPRIT

In order to reduce the number of CULPRIT nodes it is necessary to understand the causes and facts that lead to a CULPRIT node. Formally, a node w will be called a CULPRIT node if it has the following properties:

Len(Pi(s,w)) ≥ Len(Pj(s,w)) + 2

and

Σ(i=0 to k−1) w(xi, xi+1) < Σ(i=0 to l−1) w(yi, yi+1)

where Pi and Pj are paths from s to w such that Pi ≠ Pj and k ≠ l, with Pi = {x0, x1, x2, ..., xk}, Pj = {y0, y1, y2, ..., yl}, x0 = y0 = s and xk = yl = w.

With respect to the execution of the algorithm, a

BLACK-colored node that is updated by some successor node with at least a two-level difference is marked as a CULPRIT node. It is very important to understand why a node is marked as a CULPRIT node. Suppose a node 'w' has multiple paths from the source 's'; if the paths are of varying lengths then it can be a candidate for being a CULPRIT node. Let us take two paths, Pi and Pj, such that Pi has at least two more edges than Pj. This means that Pi will generally have lighter-weight edges than Pj. Given this scenario, 'w' will definitely be marked as a CULPRIT node. That shows that a path Pi having light-weight edges will update node 'w'. As the difference in the number of edges is greater than 1, 'w' will be explored first through Pj, and then after a few levels it will be updated again through Pi, resulting in a new distance being derived and causing the prior processing through 'w' to be marked as incorrect.

As 'w' has explored its children nodes, they might have explored/updated their own children, and all the distances calculated at these levels are based on the previous distance of 'w' that was obtained when it was explored through Pj. When Pi explores and updates 'w' with a new, optimized distance, all the distance calculations of all the nodes of the sub-tree rooted at 'w' become incorrect and need to be refreshed and updated. Analyzing the whole scenario critically, it can be found that the root cause of the generation of CULPRIT nodes is that the distances of the nodes are not being taken into account.

We can relate this problem to the operating system's process-scheduling mechanism: a proper time slice needs to be allocated to all processes, in which processes that require shorter processing time are given higher priority over those that require longer processing time. The described algorithm has a limitation in this regard, since no priority is given to light-weight nodes over heavy-weight nodes. This causes problems, and as a result we see more CULPRIT nodes.

Secondly, when we restart exploring nodes through a CULPRIT node, there are chances that we find more CULPRIT nodes among the predecessor nodes. In fact, it should be said that the current node was basically not the CULPRIT node but a cascading effect of some of its predecessors. This can be seen in Figure 1, where F is marked as a CULPRIT node due to the cascading effect of Z.

5. IMPROVEMENTS

In the light of the above findings, the following improvements have been proposed:

B. Improvement 1: Finding the right CULPRIT

As soon as a node is marked as CULPRIT, we must check its old parent: if it was updated due to a cascading effect from some of its predecessors (parent, grandparent, great-grandparent, and so on), then the identified parent will also be marked as a CULPRIT node. The exploration (of parents only) continues until this is no longer applicable, and the last identified parent is regarded as the true CULPRIT node.
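A possible sketch of this parent-walk is shown below. The helper names (`parent`, `culprit`, and the `updated_by_cascade` predicate, standing in for the check that an ancestor was itself hit by the cascade) are hypothetical, not taken from the paper:

```python
def find_true_culprit(v, parent, updated_by_cascade, culprit):
    """Walk up v's parent chain, marking every ancestor hit by the
    cascade as CULPRIT; the last one found is the true CULPRIT."""
    culprit.add(v)
    node = v
    while parent[node] is not None and updated_by_cascade(parent[node]):
        node = parent[node]
        culprit.add(node)
    return node  # the last identified parent: the true CULPRIT

# Toy chain a -> b -> c -> d where b and c were hit by the cascade:
parents = {"a": None, "b": "a", "c": "b", "d": "c"}
found = set()
true_culprit = find_true_culprit("d", parents, lambda n: n in {"b", "c"}, found)
```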

C. Improvement 2: Keeping the nodes' distance values as close as possible

This improvement suggests that we must give priority to lightweight nodes (i.e. nodes having smaller distance values, indicating lightweight paths) over heavyweight nodes, such that no path ever faces starvation (stuck at one point, seldom getting its turn). This improvement requires keeping track of the mean of all the GRAY nodes using the formula

μ = (Σ(i=1 to k) D[vi]) / k

where
vi: any GRAY node at any point
D[vi]: distance of vi
k: total number of GRAY nodes at the current moment

First, the mean of all the GRAY nodes is calculated, and then one of the nodes having a distance less than the mean is picked and processed (i.e. all its neighbors are explored). At the end of the exploration of all the adjacent nodes, a new mean is calculated, a node with a distance value less than the mean is picked, and the process continues. The implementation of this logic requires a dynamic doubly-ended queue that allows insertion at any position, along with an additional pointer, the mean pointer. Let us call this queue 'AvgQ', as illustrated in Figure 2. All the GRAY nodes are kept in the queue. Their mean is calculated, and a node having a distance close to the mean is marked as the mean node (this may change in the next calculation).

We initialize the algorithm by inserting the source node, s, into the AvgQ. To start, the element at the head of the AvgQ is extracted and all its adjacent nodes are explored. After exploring all adjacent nodes, μ is calculated; the elements that are less than μ are inserted in the first part, and the elements greater than μ are pushed into the second part of the AvgQ. The mean pointer marks some node whose distance is close to the μ value. In the successive iterations, a node is picked from the front of the queue to be processed and all its neighboring nodes are explored. A new μ is calculated whenever a new GRAY node is found. It is not necessary to calculate μ from scratch; doing so would make the complexity very high, so μ is calculated from scratch only the first time, while exploring through the source node, and for the rest of the time it is updated incrementally. To get the new μ, the old μ is multiplied by the number of GRAY nodes, the distance of the exploring node is subtracted, the explored node's distance is added, and the result is divided by the total number of GRAY nodes, as in the following formula:

μnew = ((μold × k) − d[vx] + d[vy]) / k

where
μnew: new mean
μold: old mean
k: total number of GRAY nodes
d[vx]: distance of the node that is exploring its adjacents
d[vy]: distance of the node that is being explored by vx
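The incremental update can be written as a constant-time helper. This is a direct transcription of the formula, assuming k is the GRAY-node count used for the old mean:

```python
def update_mean(mu_old, k, d_vx, d_vy):
    """mu_new = ((mu_old * k) - d[vx] + d[vy]) / k: the exploring node's
    distance leaves the GRAY set and the explored node's distance enters."""
    return ((mu_old * k) - d_vx + d_vy) / k

# GRAY distances {2, 4, 6} have mean 4; if the exploring node (distance 2)
# turns BLACK and a new node with distance 8 turns GRAY, the GRAY set
# becomes {4, 6, 8} with mean 6.
mu = update_mean(4, 3, 2, 8)
```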

Once the new mean has been calculated, node vy is inserted in the queue as per the rules defined in the form of if-else-if conditions below:

if d[vy] < d[node at front] then
    insert vy in front of the first node
else if d[vy] < μ then
    insert vy in front of the node being pointed to by the mean pointer
else if d[vy] > d[node at rear] then
    insert vy at the end of the queue
else
    insert vy after the mean pointer of the AvgQ
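The four insertion rules above can be sketched over a plain Python list standing in for the AvgQ. The paper uses a dynamic doubly-ended queue with a mean pointer; here the list index `mean_idx` plays the role of that pointer, and all names are illustrative:

```python
def avgq_insert(q, mean_idx, mu, dist, vy):
    """Insert node vy into AvgQ q following the if-else-if rules;
    dist maps each node to its current distance. Returns vy's position."""
    if not q or dist[vy] < dist[q[0]]:   # rule 1: in front of the first node
        q.insert(0, vy)
        return 0
    if dist[vy] < mu:                    # rule 2: in front of the mean node
        q.insert(mean_idx, vy)
        return mean_idx
    if dist[vy] > dist[q[-1]]:           # rule 3: at the end of the queue
        q.append(vy)
        return len(q) - 1
    q.insert(mean_idx + 1, vy)           # rule 4: just after the mean pointer
    return mean_idx + 1

# Queue holding nodes a, b, c with distances 1, 5, 9; mean pointer on b (mu = 5).
dist = {"a": 1, "b": 5, "c": 9, "x": 3, "y": 20, "z": 7}
q = ["a", "b", "c"]
pos_x = avgq_insert(q, 1, 5, dist, "x")  # 3 < mu: goes in front of the mean node
```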

This way the algorithm continues until all GRAY nodes are converted into BLACK nodes. When we


calculate μ, we also move the mean pointer to the left or to the right on the basis of the following rule:

if μnew > μold then
    move the mean pointer to the right
else
    move the mean pointer to the left

This rule incorporates the change in the mean so that the pointer can move towards the node that is closest to the mean.

Notes referenced in the tables:
1. As v's new parent is discovered, the old entry in Q is discarded.
2. As v is not being updated, it is not inserted in Q.

Legend used in the tables:
u: node that is exploring its adjacent nodes, colored BLACK afterwards
v: node being explored (one of the adjacent nodes of u)
Πv: parent of v
Dv: distance of v
Lv: level of v
Q: queue used to keep track of explored nodes

6. WORKING EXAMPLE

In order to illustrate and strengthen the arguments provided in the above discussion, the two versions of the algorithm, the original one and the improved one, are executed on the same graph and their dry runs are shown in Table 1 and Table 2 respectively.

After Improvement 1, no parent CULPRIT nodes are left undetected, and the second improvement leaves very few CULPRIT nodes behind. In this example, the second phase's execution is not shown due to lack of space; when it was executed, it resulted in more CULPRIT nodes. Phase I alone shows three CULPRIT nodes, while the improved algorithm resulted in no CULPRIT node and hence completed the execution in O(|V|+|E|) (for this example only).

Note to the tables: v is inserted in the 2nd part of the AvgQ if its distance is less than the mean.

7. CONCLUSION

The results of the improvements are very promising and show a great reduction in processing time. The improved algorithm needs a special data structure, the AvgQ, but it is very similar to a normal queue except for some special insertions that are easy to manage and to modify the code for. The nodes are picked in a pseudo-random way while giving priority to lightweight nodes using the mean of the GRAY nodes. The process can be further refined, and the complexity further improved, with multiple levels, e.g. using deciles in place of the mean (i.e. nine pointers), or using buckets (as in bucket sort) of the same length, and so on. The exact value of k still needs to be calculated and proved mathematically, but the results and the current state of the work are very promising. It is also seen that if the graph is of hierarchical form then no CULPRIT nodes, or very few, appear, causing k to be 1 or very close to 1, which can be represented as O(1). This makes the overall complexity of the algorithm almost equal to that of BFS.

8. REFERENCES

[1] Binwu Zhang, Jianzhong Zhang, Liqun Qi: The shortest path improvement problems under Hamming distance. Springer Science+Business Media, LLC 2006 (published online: 20 September 2006)

[2] Mikkel Thorup: Undirected Single-Source Shortest Paths with Positive Integer Weights in Linear Time. AT&T Labs Research, Florham Park, New Jersey. Journal of the ACM, vol. 46, no. 3, pp. 362-394 (May 1999)

[3] Seth Pettie, Vijaya Ramachandran, and Srinath Sridhar: Experimental Evaluation of a New Shortest Path Algorithm (Extended Abstract). In D. Mount and C. Stein (eds.): ALENEX 2002, LNCS 2409, pp. 126-142. Springer-Verlag, Berlin Heidelberg (2002)

[4] Williams, J. W. J.: Heapsort. Commun. ACM 7, 6 (June), 347-348 (1964)

[5] John Hershberger, Subhash Suri, and Amit Bhosle: On the Difficulty of Some Shortest Path Problems. ACM Transactions on Algorithms, vol. 3, no. 1, article 5 (2007)

[6] Ahuja, R. K., Mehlhorn, K., Orlin, J. B., and Tarjan, R. E.: Faster algorithms for the shortest path problem. J. ACM 37, 213-223 (1990)

[7] Fredman, M. L., and Willard, D. E.: Trans-dichotomous algorithms for minimum spanning trees and shortest paths. J. Comput. Syst. Sci. 48, 533-551 (1994)

[8] Fredman, M. L., and Willard, D. E.: Surpassing the information theoretic bound with fusion trees. J. Comput. Syst. Sci. 47 (1993)

[9] Fredman, M. L., and Tarjan, R. E.: Fibonacci heaps and their uses in improved network optimization algorithms. J. ACM 34, 3 (July), 596-615 (1987)

[10] Dijkstra, E. W. 1959. A note on two problems in connexion with graphs. Numer. Math. 1, 269-271.

[11] Thorup, M.: On RAM priority queues. In Proceedings of the 7th Annual ACM-SIAM Symposium on Discrete Algorithms. ACM, New York, pp. 59-67 (1996)

[12] Thorup, M.: Floats, integers, and single source shortest paths. In Proceedings of the 15th Symposium on Theoretical Aspects of Computer Science. Lecture Notes in Computer Science, vol. 1373. Springer-Verlag, New York, pp. 14-24 (1998)

[13] Cherkassky, B. V., Goldberg, A. V., and Silverstein, C.: Buckets, heaps, lists, and monotone priority queues. In Proceedings of the 8th Annual ACM-SIAM Symposium on Discrete Algorithms. ACM, New York, pp. 83-92 (1997)

[14] Raman, R.: Priority queues: small, monotone, and trans-dichotomous. In Proceedings of the 4th Annual European Symposium on Algorithms. Lecture Notes in Computer Science, vol. 1136. Springer-Verlag, New York, pp. 121-137 (1996)

[15] Raman, R.: Recent results on the single-source shortest paths problem. SIGACT News 28, 81-87 (1997)

[16] Andersson, A., Miltersen, P. B., and Thorup, M.: Fusion trees can be implemented with AC0 instructions only. Theoret. Comput. Sci. 215, 337-344 (1999)

[17] Rehan Akbar, Mohd Fadzil Hassan, Sohail Safdar and Muhammad Aasim Qureshi: Client's Perspective: Realization as a New Generation Process for Software Project Development and Management. Proceedings of the 2nd International Conference on Communication Software and Networks (ICCSN'10), February 26-28, 2010. IEEE, Singapore, pp. 191-195.

[18] Rehan Akbar, Mohd Fadzil Hassan, A Collaborative-Interaction Model of Software Project Development: An Extension to Agile Based Methodologies, International Symposium on Information Technology 2010. IEEE, K.L., Malaysia.

[19] Muhammad Aasim Qureshi, Onaiza Maqbool, 2007: Complexity of Teaching: Computability and Complexity. In International Conference on Teaching and Learning 2007, organized by INTI International University College at Putrajaya, Malaysia.

[20] Muhammad Aasim Qureshi, Onaiza Maqbool, 2007: Complexity of Teaching: Computability and Complexity. INTI Journal Special Issue on Teaching and Learning 2007.

[21] Rehan Akbar, Mohd Fadzil Hassan, Limitations and Measures in Outsourcing Projects to Geographically Distributed Offshore Teams, International Symposium on Information Technology 2010. IEEE, K.L., Malaysia.

[22] Muhammad Aasim Qureshi, Mohd Fadzil Hassan, Sohail Safdar, Rehan Akbar, Rabia Sammi, 2009: An Edge-wise Linear Shortest Path Algorithm for Non-Negative Weighted Undirected Graphs. Frontiers of Information Technology (FIT 09), Pakistan, December 2009.

[23] Muhammad Aasim Qureshi, Mohd Fadzil Hassan, Sohail Safdar, Rehan Akbar, 2009: An O(|E|) time Shortest Path Algorithm for Non-Negative Weighted Undirected Graphs. International Journal on Computer Science and Information Security, vol. 6, no. 1, October 2009.

[24] Muhammad Aasim Qureshi, Mohd Fadzil Hassan, Sohail Safdar, Rehan Akbar, 2010: Two Phase Shortest Path Algorithm for Non-Negative Weighted Undirected Graphs. International Conference on Communication Software and Networks (ICCSN 2010), Feb 2010, pp. 223-227, accepted.

[25] Sohail Safdar, Mohd Fadzil Hassan, "Moving Towards Two Dimensional Passwords", International Symposium on Information Technology 2010, ITSIM 2010, June 2010, Malaysia (In press).

[26] Sohail Safdar, Mohd Fadzil Hassan, Muhammad Aasim Qureshi, Rehan Akbar, "Framework for Alternate Execution of workflows under threat", 2nd International Conference on Communication Software and Networks, ICCSN 2010, Feb 2010, Singapore.

[27] Sohail Safdar, Mohd Fadzil Hassan, Muhammad Aasim Qureshi, Rehan Akbar: "Biologically Inspired Execution Framework for Vulnerable Workflow Systems". International Journal of Computer Science and Information Security, vol. 6, no. 1, IJCSIS 2009.
