SIDNEY SHEK • ARCHITECT • ATLASSIAN • @SIDNEYSHEK
Event sourcing and CQRS from the trenches
Universe of Users

users:

Id  Name     Username  APIKey
1   Homer    homer     d0a
2   Bart     bart      f00
3   Maggie   maggie    baa

users (after renaming user 3 to Lisa Jr):

Id  Name     Username  APIKey
1   Homer    homer     d0a
2   Bart     bart      f00
3   Lisa Jr  maggie    baa

users (after changing user 1's username to homers):

Id  Name     Username  APIKey
1   Homer    homers    d0a
2   Bart     bart      f00
3   Lisa Jr  maggie    baa
Universe of Users, event sourced: every change is appended to an events log, and the users table becomes a view derived by replaying that log.

events:

Seq  Event                    Time
123  SetUsername(3, Maggie)   0
124  SetName(3, Lisa Jr)      10
125  SetUsername(1, homers)   15

users (replayed from events):

Id  Name     Username  APIKey
1   Homer    homers    d0a
2   Bart     bart      f00
3   Lisa Jr  maggie    baa
Replaying the same events can also populate an entirely new view, users_new, with a derived column:

users_new:

Id  Name     Derived
1   Homer    Homer1
2   Bart     Bart2
3   Lisa Jr  Lisa Jr3
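The replay idea above can be sketched in a few lines of Python (illustrative only, not the talk's code): the same event log is folded into two different views, the original users table and the new users_new table with its derived Name+Id column.

```python
# The event log from the slides: (seq, kind, user id, value).
events = [
    (123, "SetUsername", 3, "Maggie"),
    (124, "SetName", 3, "Lisa Jr"),
    (125, "SetUsername", 1, "homers"),
]

initial = {
    1: {"name": "Homer", "username": "homer", "apikey": "d0a"},
    2: {"name": "Bart", "username": "bart", "apikey": "f00"},
    3: {"name": "Maggie", "username": "maggie", "apikey": "baa"},
}

def replay_users(events, initial):
    """Fold the events over the initial table state."""
    users = {uid: dict(row) for uid, row in initial.items()}
    for _seq, kind, uid, value in events:
        if kind == "SetUsername":
            users[uid]["username"] = value
        elif kind == "SetName":
            users[uid]["name"] = value
    return users

def replay_users_new(events, initial):
    """Build a brand-new view (Name + Id) from the very same events."""
    view = {uid: row["name"] + str(uid) for uid, row in initial.items()}
    for _seq, kind, uid, value in events:
        if kind == "SetName":
            view[uid] = value + str(uid)
    return view

users = replay_users(events, initial)
users_new = replay_users_new(events, initial)
```

Nothing about users_new had to exist when the events were written; any future view can be bootstrapped by replaying the log from the start.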
Our Identity System requirements
• Users, groups and memberships
• Searching for users
• Retrieve by email
• Incremental synchronisation
• Audit trails for changes
• High volume low latency reads
• Highly available
• Disaster recovery
• Zero-downtime upgrades
• Testing with production-like data
Evolving the architecture
[Diagram: Services make REST calls (e.g. Add User) to the Core app's Save API; the Core app persists to DynamoDB and publishes an event stream that feeds the query views.]
Command Query Responsibility Segregation
[Diagram: before CQRS, the UI sits on a single domain (business logic) layer and database, serving both users-and-groups writes and "search for users" queries.]

[Diagram: with CQRS, the command (write logic) side handles users and groups with its own database, while separate query views serve "search for users"; in the final step the write-side database disappears entirely and the query views are fed from events.]
[Diagram: Services issue REST calls (e.g. Add User) to the Core app; commands append events to DynamoDB, and a query sync process built from Kinesis and Lambda projects the event streams into query views held in ElasticSearch.]
[Diagram: the Core app later splits into Users and Groups services, each writing commands and events to DynamoDB; Kinesis and Lambda sync the query views in ElasticSearch, platform events are published to Kinesis, and external events enter through an event transformer (EventTxf).]
Events as an API

Insert / Update events:
  UserAdded(id, name, email1)
  UserUpdated(id, email = Some(email))
• Fits nicely with CRUD + PATCH
• Assume insert before update

vs Delta 'Set' events:
  UserNameSet(id, name)
  UserEmailSet(id, email1)
  UserEmailSet(id, email2)
• Encourages idempotent processing
• Minimally sized events to avoid conflicts
• Single code path for query sync
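A minimal Python sketch of why delta 'Set' events encourage idempotent processing: the handler only ever overwrites one field, and upserting the row removes any insert-before-update ordering assumption, so redelivering events (at-least-once delivery) leaves the view unchanged. Event names follow the slide; the view structure is illustrative.

```python
def apply_event(view, event):
    """Apply one delta 'Set' event to an in-memory query view."""
    kind, uid, value = event
    row = view.setdefault(uid, {})   # upsert: no insert-before-update needed
    if kind == "UserNameSet":
        row["name"] = value
    elif kind == "UserEmailSet":
        row["email"] = value

view = {}
stream = [("UserNameSet", 1, "Homer"),
          ("UserEmailSet", 1, "homer@example.com")]
for e in stream:
    apply_event(view, e)
after_once = {uid: dict(row) for uid, row in view.items()}

for e in stream:                     # redeliver the entire stream
    apply_event(view, e)
assert view == after_once            # replaying was harmless
```

The same apply function can serve as the single code path for both initial sync and live updates.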
Single stream vs Multiple streams

Single stream:
• Transactions and consistent data

Multiple streams:
• Sharding for throughput
• Better availability vs consistency compromise, at the cost of conflict resolution
Rules for splitting streams
1. Place independent events on different streams
2. Split streams by event type and unique Id
3. Identify the 'transactions' you really need
4. Use hierarchical streams to maximise number of streams
5. Splitting and joining streams later is possible
But… no guaranteed order between streams
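Rules 2 and 4 can be sketched as follows (illustrative Python, not eventsrc's API): a hierarchical stream key of entity type plus id keeps all events for one entity on one ordered stream, while hashing spreads unrelated entities across shards, which is exactly why there is no order guarantee between streams.

```python
import hashlib

NUM_SHARDS = 8

def stream_key(entity_type, entity_id):
    """Hierarchical key, e.g. 'user/42': one logical stream per entity."""
    return f"{entity_type}/{entity_id}"

def shard_for(key, num_shards=NUM_SHARDS):
    """Deterministically map a stream key to a shard."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# All of user 42's events land on the same shard, so they stay ordered
# relative to each other; other entities may land on different shards,
# and nothing orders events across shards.
assert shard_for(stream_key("user", 42)) == shard_for(stream_key("user", 42))
```

This is the same partition-key idea Kinesis uses: ordering is guaranteed only within a shard.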
Query views get populated eventually
• A field should only be updated by a single event stream
• No foreign key constraints
• In general, unique or data constraints 'enforced' on write
Let go of transactions and consistency
Why do we need transactions?
• Enforce business constraints e.g. uniqueness
• Guaranteed to see what I just wrote
Write and Read Consistency
But CAP theorem…
CAP or PACELC?

During a network Partition, choose between Availability versus Consistency.
Else, choose between Latency versus Consistency.
There is a middle ground…
Check-and-Set writes:
1. Read at seq X
2. Run business rule
3. Stream must be at X to write
• Potential false positives
• Potentially conflicting events on same stream
• Smaller streams alleviate problems

Optional forced reads:
• Query view must be at stream seq X
• Enforce timed waits
• Potential increased latency
• Do not use as default
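The three Check-and-Set steps can be sketched with an in-memory stream (a real system would use something like a DynamoDB conditional put; the Stream class here is illustrative only): the command runs its business rule against state read at sequence X, and the append fails if anyone else has written since.

```python
class ConcurrentWrite(Exception):
    """Raised when the stream moved past the expected sequence."""

class Stream:
    def __init__(self):
        self.events = []

    @property
    def seq(self):
        return len(self.events)

    def append(self, event, expected_seq):
        # 3. Stream must still be at X to write
        if self.seq != expected_seq:
            raise ConcurrentWrite(f"stream at {self.seq}, expected {expected_seq}")
        self.events.append(event)

stream = Stream()
x = stream.seq                                    # 1. read at seq X
# 2. run the business rule against the state read at X (e.g. uniqueness)
stream.append(("UserNameSet", 1, "Homer"), expected_seq=x)

conflict = False
try:                                              # a second writer still holds seq X
    stream.append(("UserNameSet", 1, "Bart"), expected_seq=x)
except ConcurrentWrite:
    conflict = True                               # possibly a false positive: retry
```

A rejected write may be a false positive (the concurrent event might not actually conflict), which is why smaller streams reduce contention.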
Tokens to emulate transactions and consistency
• Returned on read and write via ETag
• Pass as request header for:
  • Conditional write ('transaction')
  • Force query view update ('consistency')
  • Caching

Example: a read of User: homer (id 4) carries the tokens All Users: Seq 100 and User 4: Seq 23.
Using tokens to enforce state (Client and Core App):
1. Client: Create User A; Core App returns token TU1
2. Client: Create Group B; Core App returns token TG1
3. Client: Add User A to Group B, passing tokens TU1 and TG1
4. Core App forces the query view up to TU1 and TG1, applies the change, and returns token TG2
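The token flow can be sketched with an in-memory stand-in for the Core App (illustrative only, not the real service API): every write returns a per-stream sequence token, exposed as an ETag, and a later request can demand the query view has caught up to given tokens before answering.

```python
class CoreApp:
    def __init__(self):
        self.streams = {}      # stream name -> list of events
        self.view_seq = {}     # stream name -> seq the query view has applied

    def write(self, stream, event):
        self.streams.setdefault(stream, []).append(event)
        return f"{stream}:{len(self.streams[stream])}"   # token, like an ETag

    def sync_view(self):
        for name, events in self.streams.items():        # query view catches up
            self.view_seq[name] = len(events)

    def query(self, *tokens):
        for token in tokens:
            stream, seq = token.rsplit(":", 1)
            if self.view_seq.get(stream, 0) < int(seq):
                raise TimeoutError(f"query view behind token {token}")
        return "membership visible"

app = CoreApp()
tu1 = app.write("user-A", "UserAdded")          # Create User A  -> token TU1
tg1 = app.write("group-B", "GroupAdded")        # Create Group B -> token TG1

stale = False
try:
    app.query(tu1, tg1)                         # view not yet synced
except TimeoutError:
    stale = True

app.sync_view()                                  # force view up to TU1, TG1
tg2 = app.write("group-B", "UserAddedToGroup")   # Add A to B -> token TG2
app.sync_view()
result = app.query(tu1, tg1, tg2)
```

A real implementation would wait with a timeout rather than fail immediately, matching the 'enforce timed waits' caveat above.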
Conflict resolution instead of transactions
Resolve conflicts on query
• Conflicting events on same stream
• Defined resolution algorithm on replay
  • e.g. Last Write Wins
  • Convergent/Commutative Replicated Data Types (CRDTs)

Pros:
• Handles multi-region writes
• No falsely failed transactions
• More resilient query views

Cons:
• Needs to be implemented on all query nodes
• Potential temporary glitches
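A minimal Last-Write-Wins sketch in Python (illustrative; data and rule are assumptions): every query node applies the same deterministic rule, highest (timestamp, region) pair wins per field, so views converge no matter what order conflicting events arrive in.

```python
def resolve_lww(events):
    """events: iterable of (timestamp, region, field, value) tuples.
    Deterministic Last-Write-Wins: the highest (timestamp, region)
    pair wins per field; region breaks timestamp ties."""
    state, winner = {}, {}
    for ts, region, field, value in events:
        if field not in winner or (ts, region) > winner[field]:
            winner[field] = (ts, region)
            state[field] = value
    return state

# The same two conflicting multi-region writes, delivered in opposite orders
# on two different query nodes:
a = [(10, "us-east", "username", "homer"),
     (12, "eu-west", "username", "homers")]
b = list(reversed(a))
assert resolve_lww(a) == resolve_lww(b) == {"username": "homers"}
```

The tie-break on region is what makes the rule total: without it, two writes at the same timestamp could leave nodes disagreeing.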
Summary
Key takeaways
• Start small and challenge everything!
• Incremental architecture for incremental demos
• Think "Events as an API"
• Accept weaker transactions and eventual consistency
“We should be using event sourcing more than we do” (Martin Fowler, very loosely paraphrased)
event sourcing lib: bitbucket.org/atlassianlabs/eventsrc