An Active Self-Optimizing Multiplayer Gaming
Architecture
V. Ramakrishna, Max Robinson, Kevin Eustice and Peter Reiher
Laboratory for Advanced Systems Research, University of California, Los Angeles
Fifth International Workshop on Active Middleware Services
June 25th, 2003
Overview
Multiplayer games suffer from various problems
• These problems are also representative of other distributed applications
Our system
• An infrastructure for networked multiplayer games
• Routes packets using a multicast tree
• Modifies itself on the fly in the face of changing network conditions
• A middleware built using active networks, transparent to the application
Outline
• Multiplayer Gaming
• Project Objectives
• System Architecture
• Implementation
• Performance Evaluation
• Future Work and Conclusion
Introduction
Networked multiplayer games
• Hugely popular industry
• Support an increasing number of players
• DOOM (1993) – LAN-based game, used IPX
• Quake (1996) – first TCP/IP-based game, scales better than DOOM
Massively multiplayer games
• EverQuest, StarCraft, Counter Strike, Diablo
• Hundreds of game worlds all over the Internet
• Each world supports thousands of players
Multiplayer Game Design Issues
Graphics and animation quality improve by leaps and bounds
• Only essential data must be delivered
• The network could become the bottleneck
Advances have been made in
• Improving response time
• Maintaining consistent game state
Less work has been done to improve the networking infrastructure
Peer-to-Peer Architecture
Pros
• Optimal response time
• Potential for interest management
Cons
• A lot of redundant communication
• Doesn't scale well
• Poor administrative control
Client-Server Architecture
Pros
• Scales reasonably well
• Companies can retain administrative control
Cons
• The server eventually becomes a bottleneck
• Static topology
• Suboptimal position of the server with respect to clients
Mirrored Server Architecture
Pros
• Scales well
• Uniformity in response time
• Allows administrative control
Cons
• Redundant communication
• Static topology
• Inconsistent game states at servers
Objectives
Build a generic network infrastructure for multiplayer games
• Minimize redundant data communication
• Dynamic and self-adjusting in the face of failure
Demonstrate the usefulness of an overlay network of active nodes
• Allows design of generic middleware for both new and legacy applications
• Can also be used for related applications such as distributed simulations
Gaming Infrastructure (Dynamic Multicast)
Build a multicast tree spanning all player nodes
• Based on some metric, such as link latency
Select a node located "centrally" with respect to all tree nodes
• Mark this node as the root or server
An Example
[Figure: a multicast tree rooted at the server. Branch nodes perform aggregation and duplication; player nodes perform deaggregation.]
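The aggregation and deaggregation steps in this example can be sketched as follows. This is our own minimal illustration in Python; the wire format (a count followed by length-prefixed payloads) is invented, and the real system operates on ANTS capsules carrying DOOM packets.

```python
import struct

def aggregate(payloads):
    """Combine several per-player game packets into one upstream packet.
    Invented format: 2-byte count, then (2-byte length, bytes) pairs."""
    out = struct.pack("!H", len(payloads))
    for p in payloads:
        out += struct.pack("!H", len(p)) + p
    return out

def deaggregate(packet):
    """Split an aggregated packet back into the original payloads."""
    count, = struct.unpack_from("!H", packet, 0)
    offset = 2
    payloads = []
    for _ in range(count):
        length, = struct.unpack_from("!H", packet, offset)
        offset += 2
        payloads.append(packet[offset:offset + length])
        offset += length
    return payloads
```

A branch node would call `aggregate` on the packets arriving from its children before forwarding upstream, and a player node would call `deaggregate` on the packet arriving from the tree.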
Features of the Infrastructure
Overlay active network
• Comprises all adapter nodes, including player nodes
• Requires state information to be maintained
Dynamism of the infrastructure
• Every active node monitors network conditions periodically
• If the current tree structure is found to be sub-optimal:
• Modify the tree
• Relocate the root to a suitable position
What are the gains?
The number of packet transmissions is reduced
• Decreased work at routers
The tree is fault tolerant
• Sensitive to changes in network conditions
No static server
More uniform response time
Implementation
Prototype game infrastructure built
• Target game: DOOM, running on Linux
• Peer-to-peer model, uses UDP
• 4-player limit
• Game proceeds in lock-step
Active Networks Execution Environment
• ANTS (Active Node Transfer System)
• Maintains a cache at every node for storing packets
• "IPcept" kernel module used for transparent proxying and masquerading of sockets
Tree Construction and Monitoring
Initial tree formation
• Statically built
• Each active node registers with the root
• Branch nodes perform aggregation and duplication of "active" DOOM packets
• Player (client) nodes perform deaggregation
• ANTS routing table maintained at every node
Network Monitoring
Each node "pings" its neighbors periodically to gather latency information and sends this information to the root
The root calculates the optimal tree and the optimal position of the root within that tree
The tree is replaced if necessary
• The root, and other adapters, are relocated
• Packets are routed through the new tree
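A monitoring round of this kind could look roughly like the sketch below. The UDP echo probe, the port number, and the function names are all hypothetical; the actual system collects latency measurements through its active-network layer rather than a bare socket.

```python
import socket
import time

def probe_latency(neighbor, port=9999, timeout=1.0):
    """Measure round-trip time to a neighbor via a UDP echo.
    Hypothetical probe; returns RTT in msec, or None on timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        start = time.monotonic()
        sock.sendto(b"ping", (neighbor, port))
        sock.recvfrom(64)                             # wait for the echo
        return (time.monotonic() - start) * 1000.0    # RTT in msec
    except socket.timeout:
        return None                                   # neighbor unreachable
    finally:
        sock.close()

def monitoring_round(neighbors, report_to_root):
    """One round: probe every neighbor, forward readings to the root."""
    readings = {n: probe_latency(n) for n in neighbors}
    report_to_root(readings)
```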
Tree Computation
Finding the optimal spanning tree is the Steiner tree problem (NP-complete)
Instead, calculate an optimal source-based shortest-path tree using Dijkstra's algorithm
Place the root at the center of the tree
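The root-placement step can be sketched as follows, assuming the latency map gathered by monitoring is available as an adjacency dictionary. The function names, and the choice of minimizing the maximum latency to any player as the "center" criterion, are our own illustration of the approach, not the exact algorithm from the paper.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances and predecessor tree from src.
    graph: {node: {neighbor: latency}}"""
    dist = {src: 0.0}
    pred = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                pred[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, pred

def pick_root(graph, players):
    """Choose the node minimizing the maximum shortest-path latency
    to any player node (an approximation of the tree center)."""
    best, best_ecc = None, float("inf")
    for cand in graph:
        dist, _ = dijkstra(graph, cand)
        ecc = max(dist.get(p, float("inf")) for p in players)
        if ecc < best_ecc:
            best, best_ecc = cand, ecc
    return best
```

The shortest-path tree rooted at the chosen node (recoverable from `pred`) then serves as the multicast tree.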
Performance Evaluation – Overhead due to Middleware
Overhead introduced by the middleware layer
• Two active players, single link
• Average = 4.1 msec
• 93% of packets experience lower-than-average latency
• Median = 1.75 msec
• Three nodes in a chain; end nodes are players, middle node is the root
• Median at players = 1.85 msec
• Median at root = 1.5 msec
• Periodic spikes in overhead are due to our network monitoring
Performance Evaluation – Overhead of Topology Change
[Figure: root relocation from the old root to the new root]
Typical overhead of root relocation: 100-200 msec
Maximum overhead recorded: ~700 msec
Performance Evaluation
Simulation of the active gaming architecture, taking measurements on large graphs
Network traffic: total number of packets sent over links during a game step
Tree quality: average distance between player nodes
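To illustrate why the network-traffic metric favors multicast, here is a back-of-the-envelope model of link crossings per game step. This is our own simplification for intuition, not the simulation's actual accounting.

```python
def p2p_traffic(n, avg_path_len):
    """Peer-to-peer: every player unicasts its update to every other
    player, and each copy crosses avg_path_len links."""
    return n * (n - 1) * avg_path_len

def client_server_traffic(n, avg_path_len):
    """Client-server: each player sends one update to the server and
    receives one aggregate update back."""
    return 2 * n * avg_path_len

def multicast_traffic(tree_links):
    """Dynamic multicast: one aggregated packet travels up and one
    duplicated packet travels down each link of the tree."""
    return 2 * tree_links
```

For example, with 30 players and an average path length of 5 links, peer-to-peer places 4350 packets on links versus 300 for client-server, qualitatively consistent with the quadratic gap in the measured curves.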
Simulation
Used the Georgia Tech topology generator to generate random graphs (250 nodes):
• 2 Transit-Stub graphs (Internet-like topology)
• 1 random graph using the Waxman model
• 1 three-level hierarchical graph
All nodes were considered active
Multicast group size varied from 2 to 30, with members selected randomly; readings were taken from 100 instances of each group size
[Figure: Network Traffic Comparison. Number of packets vs. number of players (5 to 30) for Peer-to-Peer, Client-Server, and Multicast; 99% confidence intervals.]
Network Traffic Comparison (Client-Server vs Dynamic Multicast)
[Figure: Number of packets vs. number of players (5 to 30) for Client-Server and Multicast; 99% confidence intervals.]
Average Player-to-Player Distance
[Figure: Average distance vs. number of players (2 to 30) for Peer-to-Peer and Multicast; 99% confidence intervals.]
Related Work
Panda and Conductor – LASR, UCLA
• Application-transparent adaptation
Gathercast – [He2002]
• Packet aggregation paradigm
Multicasting using active networks, e.g. [Lehman98]
MiMaze gaming architecture – INRIA, France
• Uses IP multicasting
A distributed multiplayer game server system – [Cronin2001], University of Michigan
• Mirrored-server architecture
• Reliable multicasting; clients connect to the nearest server
• Requires re-modeling of the game
Future Work
Make the system more fault tolerant; recover from failure of the tree root
Replicate server functionality
• Peer-to-peer communication between servers
• Reduces the chance of a bottleneck
Build individual game clusters based on proximity
Conclusion
Demonstrated that adapting the packet distribution infrastructure can improve game performance
Showed the feasibility of using active networks to adapt game architectures
• For both new and legacy games
• By extension, this approach can be used for other classes of distributed applications
• The performance impact will be even greater for non-real-time applications
Showed that dynamically modifiable trees and relocatable servers are practical
• On-the-fly modifications have a very small impact on performance
Thank You
Further questions?
Email: [email protected]
Web Page: http://www.cs.ucla.edu/~vrama