October 5-7 in Chapel Hill, NC
Data Modeling Zone 2015
Monday, October 5 - session levels: fundamental (for all audiences), intermediate, advanced

7:00-9:00 Breakfast in the Hill Ballroom
8:30-11:30 (parallel sessions):
- Data Modeling Fundamentals - Steve Hoberman, Steve Hoberman & Associates, LLC (Old Well)
- Data Quality for Data Modelers - Sue Geuens, President of DAMA International (North Parlor)
- Making Your Unstructured Data Come Alive - Bill Inmon, Forest Rim Technologies (Chancellor)
- About Cassandra - Open Software Integrators (Alumni)
- Concurrent, Integrated, Value Sensitive Data Design - A Novel Approach to Data and Data Model Design - Richard Ordowich, STS Associates Inc. (South Parlor)
11:45-12:00 Welcome and Announcements (Hill Ballroom)
12:00-1:00 KEYNOTE: Data Modeler 2020 – Future of Data Modeling Panel - Michael Blaha, Deborah Henderson, Bill Inmon, Dave Wells, Graham Witt (Hill Ballroom)
1:00-2:15 Lunch in the Hill Ballroom
2:15-3:15 (parallel sessions):
- UML Made Easy! - Norman Daoust, Daoust Associates (Chancellor)
- DMBOK Overview - Deborah Henderson, Broadstreet Data (Old Well)
- Enterprise Conceptual Data Modeling - Brian Shive, Microsoft (Alumni)
- Normalization - The Achilles Heel of Data Modeling - Gordon Everest, University of Minnesota (North Parlor)
- Case Study: Data Warehousing with SCD - Yoshihiko Hoshi and Hiroshi Yagishita, Future Modeling Technologies (South Parlor)
3:15-3:45 Afternoon Snacks in the Colonnade
3:45-4:45 (parallel sessions):
- Mastering Master Data Communication - Andrew Kapp, Westar Energy, Inc. (South Parlor)
- Ensemble Modeling - Hans Hultgren, Genesee Academy (Chancellor)
- Tools and Techniques for 21st Century Data Modeling - Jeremy Posner, Synechron (Old Well)
- Prove it! Verifying the worth of data modeling with ROI analysis - Kim Medlin, Wells Fargo (Alumni)
- Data Integration Tips and Tricks - Bob Conway, Information Engineering Associates (North Parlor)
5:00-6:30 Welcome Reception on the Terrace
Tuesday, October 6 - session levels: fundamental (for all audiences), intermediate, advanced
7:00-9:00 Breakfast in the Hill Ballroom
7:45-8:15 SIGs (parallel sessions):
- Protecting Personal Data From Hackers - Cathy Nolan, Allstate, and Ashley Wilson, Attorney at Law (South Parlor)
- How to talk Data to Non-Data People - Jill Camper, DST Systems, Inc. (Alumni)
- Data Modeler to Data Scientist - A New Maturity Approach - Sanjay Shirude, ACCEL BI (North Parlor)
- Natural vs. Surrogate Primary Keys in Logical Data Models - Gary Whitney, Microsoft (Old Well)
- Session to be announced in July!
8:30-11:30 (parallel sessions):
- Facilitation and Training Skills for Data Professionals - Artie Mahal, ASM Group Inc. (Old Well)
- Semantic Structure Analysis and Dimensional Modeling - Dave Wells, Infocentric (Chancellor)
- Data Governance for Data Modelers Workshop - Deborah Henderson, Broadstreet Data (North Parlor)
- About MongoDB - Open Software Integrators (Alumni)
- UML in Depth - Norman Daoust, Daoust Associates (South Parlor)
11:45-12:00 Welcome and Announcements Hill Ballroom
12:00-1:00 KEYNOTE: To be announced in April!! (Hill Ballroom)
1:00-2:15 Lunch in the Hill Ballroom
2:15-3:15 (parallel sessions):
- The Journey to an Enterprise Data Model - Sherri Adame, Premier Farnell (Alumni)
- Evaluating Data Modeling Tools? Helping You to Decide - George McGeachie, Metadata Matters (South Parlor)
- Crossing the Unstructured Barrier - Bill Inmon, Forest Rim Technologies (Chancellor)
- Modern Data Architecture? Or Fresh Messaging for Familiar Concepts? - Eddie Sayer, Teradata (Old Well)
- Advanced SQL Queries - Michael Blaha, Modelsoft Consulting (North Parlor)
3:15-3:45 Afternoon Snacks in the Colonnade
3:45-4:45 (parallel sessions):
- Accounting: The Essence - David Hay, Essential Strategies, Inc. (Alumni)
- Using ISO 8000 to Measure the Quality of Master Data - Peter R. Benson, Electronic Commerce Code Management Association (ECCMA) (South Parlor)
- From Operational to Analytics: An Exploration of Data Model Designs for Software Business Applications - Ralph Hollinshead and Goran Stanisic, SAS Institute (North Parlor)
- Understaffed with data modelers? How to train developers as apprentice data model reviewers - Sally Greenwood, TDS Telecom (Old Well)
- MapReduce vs. OLAP – Do These Two Worlds Collide? - Dave Wells, Infocentric (Chancellor)
5:00-6:30 Betting on Data Modeling with Wild Bill’s Casino! (Old Well)
Wednesday, October 7 - session levels: fundamental (for all audiences), intermediate, advanced
7:00-9:00 Breakfast in the Hill Ballroom
8:30-11:30 (parallel sessions):
- Data Vault Fundamentals and Workshop - Hans Hultgren, Genesee Academy (North Parlor)
- Corporate Dictionary Workshop - Peter R. Benson, Electronic Commerce Code Management Association (ECCMA) (South Parlor)
- About Hadoop - Open Software Integrators (Alumni)
- Data Modeling for Sustainable Systems - Graham Witt, Ajilon (Chancellor)
- Advanced Data Modeling Challenges Workshop - Steve Hoberman, Steve Hoberman & Associates, LLC (Old Well)
11:45-12:45 (parallel sessions):
- Competency Assessment for the Data Professional - Artie Mahal, ASM Group Inc. (Alumni)
- Six Habitual Architecture Mistakes and How to Avoid Them - Eddie Sayer, Teradata (Old Well)
- Conducting Data Modeling Project Meetings - Gordon Everest, University of Minnesota (South Parlor)
- Case Study: Roadmap to an Enterprise Logical Data Model - Missy Wittmann, American Family (Chancellor)
- Implementing Data Vault in a Columnar Database - Petr Olmer, GoodData (North Parlor)
12:45-1:45 Lunch in the Hill Ballroom
1:45-4:45 (parallel sessions):
- FoCuSeD Data Modeling - facilitated data modeling - Gary Rush, MGR Consulting, Inc. (Chancellor)
- Writing effective business rules - a practical method - Graham Witt, Ajilon (North Parlor)
- The Data Modeler’s Road to the Certified Data Management Professional (CDMP) - Patricia Cupoli, CCP, CDMP, CBIP, DAMA International (Alumni)
- Data Modeling by Example - Introduction and Workshop - Marco Wobben, BCP Software (Old Well)

Just 375 days till DMZ 2016!! Join us Oct 17-19 in Portland, OR!
Data Modeling Fundamentals
Steve Hoberman, Steve Hoberman &
Associates, LLC
Assuming no prior knowledge of data
modeling, we start off with an exercise that
will illustrate why data models are essential to
understanding business processes and
business requirements. Next, we will explain
data modeling concepts and terminology, and
provide you with a set of questions you can ask
to quickly and precisely identify entities
(including both weak and strong entities), data
elements (including keys), and relationships
(including subtyping). We will also explore
each component on a data model and practice
reading business rules. We will discuss the
three different levels of modeling (conceptual,
logical, and physical), and for each explain
both relational and dimensional mindsets.
Steve Hoberman taught his first data modeling
class in 1992 and has trained more than
10,000 people since then, spanning every
continent except Africa and Antarctica. Steve is
known for his entertaining and interactive
teaching style (watch out for flying candy!),
and organizations around the globe have
brought Steve in to teach his Data Modeling
Master Class, which is recognized as the most
comprehensive data modeling course in the
industry. Steve is the author of seven books on
data modeling, including the bestseller Data
Modeling Made Simple. His latest book, Data
Modeling for MongoDB, presents a streamlined
approach to data modeling for NoSQL
solutions. One of Steve’s frequent data
modeling consulting assignments is to review
data models using his Data Model Scorecard®
technique. He is the founder of the Design
Challenges group, recipient of the 2012 Data
Administration Management Association
(DAMA) International Professional
Achievement Award, and highest rated
presenter at Enterprise Data World 2014.
Data Quality for Data Modelers
Sue Geuens, President of DAMA
International
Data Quality is not generally a priority when
you start data modeling. The focus is on
defining your conceptual model or
understanding of the business requirements;
parlaying that into a decent logical model and
then handing it over to the DBAs or physical
DB modelers.
Unfortunately, that focus ignores the fact
that Data Quality is a primary driver in being
able to use the data the business has captured,
created and stored to provide meaningful
business intelligence that drives accurate and
timely business decisions.
Sue’s almost 20 years in data stand her in
good stead. She has been involved in many
data modeling projects, designing and
understanding very large databases and
systems; has run a couple of data quality
projects; and most recently finished a two-year
contract to implement a Data Governance
program at SA’s largest mobile operator. She is
currently on a six-month project at the same
company designing and implementing a KPI
metric model for the Commercial Operations
division (including Online and Self Service),
and this project has managed to unearth many
data quality anomalies.
This workshop will help data modelers
understand how data quality actually MUST
fit into any data model – be it at the
conceptual level or right down in the nuts and
bolts of the physical model.
Typical discussions will be around the
dimensions of data quality, how to keep strict
controls on the data as you start to develop
your models, understanding how to get the
business to specify their data at a “Fit for
Purpose” level – enabling data modelers to
manage their models and to build them
keeping the quality of the data as a key
priority. Further discussions will include why
Sue believes that primary keys, foreign keys,
clearly defined relationships and data
attributes all contribute to appropriate data
quality. Finally, a discussion on measuring
how your data models stand up against good
data quality and governance.
You should leave this workshop with a clear
understanding of what changes you may need
to bring to your future data modeling efforts to
improve the quality of the data your business
requires to make solid and innovative business
decisions for the future.
Sue is a Senior Data Management Specialist
who has been customer facing for the past 18
years. During this time she has focused
specifically in the financial (banking,
insurance, pensions) and telecommunications
sectors, gaining immense knowledge and
expertise in both. Each year she attends a
number of Data Management conferences
giving presentations both locally and overseas.
Her initial step into the world of data came
about in the form of designing and
implementing the first registration system for
the NHBRC. Since then she has moved on to
various businesses and enterprises, always
working toward Data Quality and Integrity,
which is her passion. Sue was elected President
of DAMA SA in January 2009 and was the
driving force behind the Inaugural Meeting,
held on 18th February 2009 at Vodaworld in
Midrand. Having just completed implementing
Data Governance at a large SA Telco, Sue has
moved her focus to responding to the many
challenges facing SA companies with their
data. Sue has just been voted in as the DAMA
International President for the 2014/2015 term.
Making Your Unstructured Data
Come Alive
Bill Inmon, Forest Rim Technologies
80% of the data in the corporation is
unstructured. Yet nearly all of the corporate
decisions are made on the basis of structured
data. This half day presentation addresses
how you can start to incorporate unstructured
data in the corporate decision making process.
This presentation entails the various aspects
of textual disambiguation and explores how
textual disambiguation is used to transform
textual data into structured data that can then
be analyzed by standard analytical tools.
Bill Inmon – the “father of the data warehouse”
– has written 53 books published in nine
languages. Bill’s latest adventure is the
building of technology known as textual
disambiguation – technology that reads raw
text in a narrative format and allows the text to
be placed in a conventional database so that it
can be analyzed by standard analytical
technology, thereby creating unique business
value for Big Data/unstructured data. Bill
was named by ComputerWorld as one of the ten
most influential people in the history of the
computer profession. For more information
about textual disambiguation refer to
www.forestrimtech.com.
About Cassandra
Open Software Integrators
This is a basic introduction to Cassandra
including how it works, ideal use cases,
counter-indicated use cases and best practices.
We'll show you:
- how to install and operate Cassandra
- how to operate the Cassandra APIs (Java, REST)
- how to monitor Cassandra.
The training will be conducted by Open
Software Integrators, a Big Data consulting
and services company specializing in Hadoop,
Cassandra, MongoDB and other NoSQL
technologies. OSI focuses on executive strategy,
initial install, design and implementation,
helping companies transition from legacy
systems into data-driven organizations.
Concurrent, Integrated, Value
Sensitive Data Design - A Novel
Approach to Data and Data Model
Design
Richard Ordowich, STS Associates Inc.
One of the significant challenges in designing
a data model is the human factor. Current
data modeling best practices concentrate on
the technical challenges. Addressing the
human factors requires techniques mostly
unfamiliar to designers. Human factors that
affect data model design include:
- Enterprise level participation
- Common interests
- Autonomy
- Consensus
- Cooperation
- Accountability
- Ownership
- Politics
We refer to these human factors as Values.
These Values impact a data model design in
various ways:
- Acceptance of the model
- Ownership of the model
- The suitability of the model to satisfy all needs and expectations
- The extensibility of the model
- The adaptability of the model
Many describe the human factors as the gap
between the business users and IT. It is also
frequently referred to as the politics affecting
the design or the “elephant in the room”. Few
of the current data model design best practices
adequately address these human factors or
Values.
We researched the history of data modeling to
understand the successes and failures of data
models. We studied other domains where
design principles are similar to data model
design. We identified human factor best
practices used in these domains and developed
a series of human factors design practices that
should be included in data modeling.
In the 1970’s, Charles Bachman introduced
the Three Schema Approach for data models.
The three schemas consisted of an external
model, a conceptual model and an internal or
physical model. The external model represents
the business user viewpoints.
In our work in human factors design we focus
on the external model.
In this session we will share with you the best
practices we have adapted from other domains
that will help to improve data and data model
designs. These best practices will help you
identify the Values of your users and factor
those Values into your design. These best
practices will also help you bridge the
business/IT gap and help you work with the
business users as collaborators and owners of
the data model design. We will provide you
with a roadmap that shows you how to adopt
these best practices in your organization. We
call this the Concurrent, Integrated, Value
Sensitive Data Design.
Richard Ordowich is an independent
consultant with more than 30 years of
experience in IT including data governance,
data standards, data warehouse and data
management. Richard has an in-depth suite of
skills including project management, quality
assurance, architectural design, software
development, hardware development, business
management, business analysis, market
research, operational workflow design, product
development and IT Management. In his career
Richard has designed innovative hardware
solutions, large scale trading systems and
managed new startup companies and software
development teams. Richard has designed data
models including a common data warehouse
model used in government.
Richard has designed and implemented data
governance and data quality programs and
provides guidance and mentoring to business
managers and IT in the areas of data
governance, data quality and data warehouse.
Richard’s expertise includes strategic vision
along with a hands-on approach to problem
solving. Richard has broad industry experience
in utilities, financial services, government,
manufacturing, media, and
telecommunications third party services.
Richard is also an experienced helicopter and
fixed wing pilot.
Data Modeler 2020 – Future of
Data Modeling Panel
Michael Blaha, Deborah Henderson, Bill
Inmon, Dave Wells, Graham Witt
The processes, roles, and tools involved in
building applications are changing rapidly,
due primarily to big data, NoSQL, and very
shortly the Internet of Things. As our
environment changes, data modelers may need
to refine skills and techniques. Get a glimpse
into the future from five experts and ask your
questions!
UML Made Easy!
Norman Daoust, Daoust Associates
An introduction to the thirteen UML diagram
types and their relationship to data modeling.
We’ll focus on those most relevant to data
professionals. The presentation includes
examples of each of the thirteen diagram types
from a case study.
Attendees will learn:
- which UML diagram type is closest to a data model
- which UML diagram type includes entity names from your data model
- which UML diagram type visually illustrates the allowable state changes of an entity from your data model
- when to use each of the diagram types
Norman Daoust founded his consulting
company Daoust Associates,
www.DaoustAssociates.com in 2001. His
clients have included the Centers for Disease
Control and Prevention (CDC), the Veteran’s
Health Administration, the Canadian Institute
for Health Information, a Fortune 500 software
company, and several start-ups. He has been
an active contributor to the healthcare industry
standard data model, the Health Level Seven
(HL7) Reference Information Model (RIM)
since its inception. He enjoys introducing data
and process modeling concepts to the business
analysis community and conducting business
analysis training courses. Norman’s book,
“UML Requirements Modeling for Business
Analysts” explains how to adapt UML for
analysis purposes.
Data Management Body of
Knowledge (DMBOK) Overview
Deborah Henderson, Broadstreet Data
The quick 101 course through the DMBOK for
those who need to consider how they might or
should use it as an operating framework for
data management.
An overview will be followed by techniques on
using the DMBOK as a guide for evaluating
current state and priorities, and where your
gaps are with staff skills and management
ownership.
DMBOK2, the new revision due in 2015, will
also be cited.
Deborah Henderson, B.Sc., MLS, PMP, CDMP
is the Data Governance Practice Manager for
Broadstreet Data in Toronto and teaches data
governance fundamentals classes publicly and
privately. She is Program Director for the
DAMA-DMBOK (Data Management Body of
Knowledge), a global effort going on since 2005.
With over 25 years in data management, she
consults in data governance in the energy,
capital markets, health and automotive sectors.
Enterprise Conceptual Data
Modeling
Brian Shive, Microsoft
I will present a conceptual Enterprise Data
Model that is a union of models from IBM,
Boeing, AT&T and Microsoft. I will
demonstrate how views of the EDM can be
used for IT planning, scoping projects, defining
data requirements for IT packages, defining
data integration and mapping to conceptual
value chain models.
Learn how to communicate the data model to
business users and IT staff.
We will walk through the EDM in detail with
scenarios to demonstrate the value of the
EDM.
Brian started his data modeling career in the
late 1970s while working as a consultant to the
relational database gurus at IBM. Brian
learned from John Zachman at IBM how to
use the discipline of engineering when
designing data. Brian works at Microsoft
where during his 18 years he has served as
Microsoft Corporate Data Administrator,
Enterprise Architecture Lead Information
Architect, Principal Architect, Development
Manager and most-fun-one Developer. He spent
16 years with Boeing IT. Brian also worked as
Solar Energy Designer, Executive of Boy Scouts
of America, musician, comedian and poet and
janitor. Brian and his wife and two children
live in the Seattle area. He teaches Aramaic in
his Methodist church and can be seen on
YouTube sounding at times like Jimi Hendrix.
He is working on a book of poetry and loves
teaching data modeling, database design and
data integration. The human brain and the
behavior it elicits have provided Brian with
years of study in neurology, psychology,
sociology and history. He is the author of the
novel, “Data Engineering”.
Normalization - The Achilles Heel
of Data Modeling
Gordon Everest, University of Minnesota
Many database professionals think they know
all about normalization, but when it gets down
to doing it, or explaining it to others in their
organization, they often stumble. In this
workshop you will test your knowledge, work
on some exercises, and learn (or reinforce)
some principles of record-based (ER/relational)
database design. This is an opportunity to
strengthen your own level of knowledge.
Unfortunately, there are also several myths
and misunderstandings surrounding the
concept that confuse and confound both
novices and experts. This session explores:
- What is normalization, its history and evolution, why it is important
- The basic principles behind normalization
- What we need to know to perform normalization
- Learn and apply a practical, effective method for recognizing and correcting violations of normal forms
- The consequences of not identifying and correcting violations of the normal forms
- Why a DBMS or data modeling tool cannot help data modelers produce a normalized design
- Since good design requires normalization, what is denormalization, and why do we consider it?
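As a toy illustration of the kind of violation such a method catches (our own hypothetical example, not taken from the session; the table and column names are invented), consider an order table that repeats customer details: the city depends on the customer, not on the order key, which invites update anomalies. The sketch below uses Python's built-in sqlite3 to show the anomaly and the normalized repair.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Unnormalized: customer_city depends on customer_id, not on order_id,
# so the city is repeated on every order and can drift out of sync.
cur.execute("""CREATE TABLE order_flat (
    order_id INTEGER PRIMARY KEY,
    customer_id INTEGER,
    customer_city TEXT,
    amount REAL)""")
cur.executemany("INSERT INTO order_flat VALUES (?, ?, ?, ?)",
                [(1, 100, "Raleigh", 25.0),
                 (2, 100, "Raleigh", 40.0),   # city repeated
                 (3, 200, "Durham", 10.0)])

# Normalized: each fact is stored exactly once.
cur.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, city TEXT)")
cur.execute("""CREATE TABLE order_norm (
    order_id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customer(customer_id),
    amount REAL)""")
cur.execute("INSERT INTO customer SELECT DISTINCT customer_id, customer_city FROM order_flat")
cur.execute("INSERT INTO order_norm SELECT order_id, customer_id, amount FROM order_flat")

# Update anomaly in the flat design: changing one row leaves a contradiction.
cur.execute("UPDATE order_flat SET customer_city = 'Cary' WHERE order_id = 1")
cities = {row[0] for row in cur.execute(
    "SELECT customer_city FROM order_flat WHERE customer_id = 100")}
print(sorted(cities))  # two conflicting cities for the same customer

# The normalized design has a single place to update.
cur.execute("UPDATE customer SET city = 'Cary' WHERE customer_id = 100")
print(cur.execute("SELECT city FROM customer WHERE customer_id = 100").fetchone()[0])
```

The flat table ends up asserting two different cities for customer 100, while the normalized pair of tables cannot express that contradiction at all.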
Dr. Everest is Professor Emeritus of MIS and
Database in the Carlson School of
Management at the University of Minnesota.
With early “retirement”, he continues to teach
as an adjunct. His Ph.D. dissertation at the
Univ of Pennsylvania Wharton School entitled
“Managing Corporate Data Resources” became
the text from McGraw-Hill, “Database
Management: Objectives, System Functions,
and Administration” in 1986 and remained in
print until 2002!
Gordon has been teaching all about databases,
data modeling, database management systems,
database administration, and data
warehousing since he joined the University in
1970. Students learn the theory of databases,
gain practical experience with real data
modeling projects, and with hands-on use of
data modeling tools and DBMSs. Besides
teaching about databases, he has helped many
organizations and government agencies design
their databases. His approach transfers
expertise to professional data architects within
those organizations by having them participate
in and observe the conduct of database design
project meetings with the subject matter
experts. He is a frequent speaker at
professional organizations such as DAMA.
Case Study: Data Warehousing
with SCD
Yoshihiko Hoshi and Hiroshi Yagishita,
Future Modeling Technologies
We will show you a case study of
implementing theoretical data warehouse
models in a real business setting, including
how Slowly Changing Dimensions (SCDs) are
implemented in Japan. We will also introduce
our case study of segmenting customers
(Customer Tags) in a women’s cosmetics
company.
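For readers new to the topic, a Type 2 Slowly Changing Dimension keeps history by closing the current dimension row and inserting a new version whenever a tracked attribute changes. The sketch below is an illustrative simplification, not the presenters' implementation; the function name `apply_scd2` and the attribute names are invented.

```python
from datetime import date

def apply_scd2(dim_rows, key, new_attrs, today):
    """Type 2 SCD update: expire the current row for `key` and append a new
    version if any tracked attribute changed; otherwise do nothing."""
    current = next((r for r in dim_rows
                    if r["key"] == key and r["is_current"]), None)
    if current and all(current[k] == v for k, v in new_attrs.items()):
        return  # no change: leave the current version open
    if current:
        current["is_current"] = False
        current["end_date"] = today     # close out the old version
    dim_rows.append({"key": key, **new_attrs,
                     "start_date": today, "end_date": None,
                     "is_current": True})

customer_dim = []
apply_scd2(customer_dim, 100, {"segment": "Standard"}, date(2015, 1, 1))
apply_scd2(customer_dim, 100, {"segment": "Premium"}, date(2015, 6, 1))

# Two versions now exist, so full history is preserved for analysis.
print(len(customer_dim))
print([r["segment"] for r in customer_dim if r["is_current"]])
```

A real warehouse would do the same bookkeeping with SQL MERGE or ETL tooling; the dictionary version just makes the expire-and-insert mechanics visible.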
Yoshihiko Hoshi has over 30 years experience
in data modeling, and is currently working for
a Japanese Major Bank. In Japan, he designed
the de facto standard data model for financial
derivatives, Risk Management and Market
Data.
Hiroshi Yagishita is a system modeler and
consultant with over 30 years of application
development experience. He initially developed
a mainframe operating system, and is currently
a database application specialist, especially
focusing on modeling technologies. He now
works for “Future Modeling Technologies”, an
independent solution provider in Japan, and is
also a DAMA-Japan member.
Mastering Master Data
Communication
Andrew Kapp, Westar Energy, Inc.
During software development and as
developers come and go, data modelers often
lack the time, software and methods to
adequately document and communicate
critical model details such as master data’s
lineage and metadata. In addition, as an
application or database matures, good
documentation can become increasingly
difficult to maintain, and without good,
readily accessible documentation a database
can become a mix of confusion and spaghetti.
This session will discuss methods to make
data (particularly master data) more intuitive
to manage and track for all of IT.
Andrew Kapp is the Enterprise Information
Architect for Westar Energy, Inc., a Kansas-
based power generation and delivery utility. He
has 14 years of experience in data management
and architecture across a diverse variety of
industries and data. Andrew specializes in
modeling techniques to improve information
awareness and accessibility among the IT and
data user communities.
Ensemble Modeling
Hans Hultgren, Genesee Academy
Ensemble Modeling represents a family of
modeling approaches that share a common
purpose and a common modeling paradigm.
Ensemble forms address our need for data
integration, historization, auditability and
modeling agility. This session will cover the
need, the approach, the underlying premise
and the current flavors of Ensemble Modeling.
Attendees can expect to understand why
organizations should consider Ensemble
Modeling for their DWBI program.
President at Genesee Academy and a Principal
at Top Of Minds AB. Data Warehousing and
Business Intelligence educator, author,
speaker, and advisor.
Currently working on Business Intelligence
and Enterprise Data Warehousing (EDW) with
a focus on Ensemble Modeling and Data Vault.
Primarily in Stockholm, Amsterdam, Denver,
Sydney and NYC.
Published the data modeling book “Modeling
the Agile Data Warehouse with Data Vault”,
available on Amazon in both print and Kindle
editions.
Specialties: Information Management and
Modeling, Ensemble Modeling, Data Vault
Modeling, Agile Data Warehousing, Education,
e-Learning, Entrepreneurship and Business
Development.
Tools and Techniques for 21st
Century Data Modeling
Jeremy Posner, Synechron
For many years data modeling, which grew up
in the last century, has been bogged down by
the tools that modelers use and slow release
cycles due to highly manual effort at each
stage.
In the meantime, the software development
arena has moved ahead greatly and reaped the
benefits of tooling to deliver faster and more
iteratively. We call it “Agile” but you don’t
have to “be” Agile in order to reap these
rewards.
We will present some innovative techniques
that allow you to deliver your data model
faster, with higher quality and at lower cost.
We call this “21st Century Data Modeling”,
and will describe some tools and techniques
that actually allow us to achieve these goals.
You will learn:
- why reducing the release cycle is so vital for the success of an Enterprise Data Model
- how employing continuous integration / delivery techniques from software development benefits data model development
- how to reduce re-work by implementing model checks and test harnesses
- output artifacts you can deliver from the modeling cycle
- testing your model for backwards compatibility in a programmatic way
- techniques to deal with breaking changes, when they absolutely need to happen.
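To make the idea of automated model checks concrete, here is a minimal sketch (our illustration, not the presenter's tooling; the dictionary-based model format and rule wording are invented) of the kind of check a continuous-integration step could run against a model export:

```python
# A data model exported as plain dictionaries (hypothetical format):
model = {
    "Customer": {"columns": ["customer_id", "name", "email"],
                 "primary_key": ["customer_id"]},
    "Order": {"columns": ["order_id", "customer_id", "placed_on"],
              "primary_key": []},          # violation: no primary key
}

def check_model(model):
    """Return a list of rule violations; an empty list means the model passes."""
    problems = []
    for entity, spec in model.items():
        if not spec["primary_key"]:
            problems.append(f"{entity}: no primary key defined")
        # Every primary key column must actually exist in the entity.
        for col in spec["primary_key"]:
            if col not in spec["columns"]:
                problems.append(f"{entity}: primary key column {col} not in columns")
    return problems

violations = check_model(model)
print(violations)  # a CI job would fail the build if this list is non-empty
```

Backwards-compatibility testing fits the same mold: compare the current export against the previous release and flag dropped entities or columns before the model ships.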
Jeremy Posner is Senior Director – Data
Strategy and Services at Synechron, and a
Data Architect with 20 years experience mainly
within global financial institutions including
Morgan Stanley, Deutsche Bank, Merrill Lynch
and JP Morgan. He specializes in enterprise
data architecture, modeling, tools, standards
and metadata. Still a hands-on technician, he
strives to apply new practices and technologies
to common data problems with a “different and
better” mindset.
Prove it! Verifying the worth of
data modeling with ROI analysis
Kim Medlin, Wells Fargo
In software development organizations where
the culture of data modeling isn’t ingrained,
upper management will probably need to be
convinced that the additional effort required to
model the data is worthwhile to the enterprise.
In these cases, being able to determine the
return on investment (ROI) of data modeling
will be imperative. To do so, you must
understand the general value proposition of
data modeling. In this session, learn how to
justify the time and effort required to
implement data modeling. Appreciate the
benefits of data modeling, such as reduced
maintenance, improved data quality, enhanced
requirements definitions, and a value-added
communications channel. Learn to calculate
the data modeling ROI using approaches such
as cost-benefit, percentage of project savings,
percent of maintenance savings, and percent of
development.
A Data Architect for over 20 years, Kim Medlin
has worked for Fortune 100 companies as well
as high-powered consulting firms. His data
modeling expertise runs the gamut from
healthcare to banking to warehouse
automation. Having held a variety of both
technical and managerial roles, Kim is a
stalwart of data modeling and Data
Architecture evangelism. Kim has worked
directly with Wells Fargo, Keane Consulting,
Xerox, BB&T, and Data General. Kim received
his Mathematics degree (with a concentration
in Computer Science) from Appalachian State
University.
Data Integration Tips and Tricks
Bob Conway, Information Engineering
Associates
Data integration is the cornerstone of data
warehousing (DW) and master data
management (MDM) but has proven to be one
of the more difficult technical challenges. This
presentation provides helpful techniques for
merging data from disparate functional areas
of organizations into a unified semantic data
model. The approach leverages a modular
design and metadata-driven method to provide
a durable, extensible data integration tool kit.
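As a flavor of what "metadata-driven" can mean in practice (our own minimal sketch, not Bob's RAPID® method; the mapping format and source names are invented), source-specific field names can be translated into a unified semantic model by table-driven mappings rather than hand-written code per source:

```python
# Per-source mapping metadata: unified field -> source field (hypothetical).
MAPPINGS = {
    "crm":     {"customer_id": "CustID", "name": "FullName"},
    "billing": {"customer_id": "acct_no", "name": "acct_name"},
}

def to_unified(source, record):
    """Translate one source record into the unified semantic model,
    driven entirely by the mapping metadata above."""
    mapping = MAPPINGS[source]
    return {unified: record[src] for unified, src in mapping.items()}

rows = [to_unified("crm", {"CustID": 1, "FullName": "Ada Lovelace"}),
        to_unified("billing", {"acct_no": 2, "acct_name": "Alan Turing"})]
print(rows)
```

Adding a new source then means adding one mapping entry, not writing new merge code, which is what makes the tool kit durable and extensible.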
Bob Conway has 30+ years expertise in data
modeling, data architecture and data
warehouse design. As an internal resource and
as a consultant Bob has implemented dozens of
successful DW/BI projects in industries as
diverse as telecommunications, financial
services, health care, manufacturing, retail,
and Oil and Gas. He developed the RAPID®
Architecture and Methodology and teaches
classes on data modeling, architecture, and
agile development. He was an adjunct faculty
member in graduate programs at University of
Denver and University of Colorado. He brings
his rich business and technical experience to
the classroom as meaningful examples. His
presentation style is informative and
entertaining.
Protecting Personal Data From
Hackers
Cathy Nolan, Allstate, and Ashley Wilson,
Attorney at Law
Data Analysts and Data Modelers have a
unique opportunity to preview the amount of
data their company is collecting and to
consider if it is all necessary information to
run the business or just “nice to have”.
Questions to be asked should include: Is this
data secure from both internal and external
hackers? What data should be tagged as
“sensitive” data? Is the company breaking any
privacy laws by storing personal data? Who
within the company has access to personal
data? What happens if there is an external
data breach?
At a personal level, your identity and personal
data are being amassed by data brokers every
time you log on to your laptop, use your cell
phone, access an app, or use your GPS.
Companies are collecting a variety of data
about you, combining it with location
information, and using it to both personalize
their own services and to sell to advertisers for
behavioral marketing. Law enforcement
agencies are tracking your car and insurance
companies are installing devices to monitor
your driving. Clerks are making copies of your
credit cards. And if that wasn’t enough, the
FBI has reported that hackers have been
discovered embedding malicious software in
company computers, opening a virtual door for
criminals to rifle through an organization’s
valuable personal and financial information.
In addition to warning you about the ways
your data can be stolen, this presentation will
offer suggestions for limiting the amount of
personal data that is available to be seized and
divulged.
Cathy Nolan has an MBA in Business
Administration and 30 years’ experience as an
Information Analyst. When she became a
victim of identity fraud through the hacking of
her credit card information, she began
extensive investigation into credit card and
identity theft along with the many ways
personal information is being compromised.
Ashley Wilson is an Attorney at Law practicing
in Illinois and Wisconsin. She is a graduate of
the University of Illinois and received her law
degree at Marquette University. As an attorney
she became interested in the growing threat to
privacy and the lack of legal protection
afforded to individuals by the government and
our court system.
How to talk Data to Non-Data
People
Jill Camper, DST Systems, Inc.
Do you ever find it frustrating to talk about
data to those who just don’t seem to know how
to “talk data”? This session will give specific
tips on how to talk data to non-data people
so that you can both be on the same page and
get your projects and ideas approved.
Jill has over 16 years of data management
experience in the financial services industry,
including high profile conversions. For the past
7 years she has been focused on data design in
both the mainframe and open systems world
including traditional RDBMS design as well as
Data Warehouse, Star Schemas, and BI
oriented designs. She loves educating people on
all things data and the importance of data in our
everyday lives.
Jill has her bachelor’s degree in Psychology
and her M.B.A and is also a Certified Data
Management Professional.
Data Modeler to Data Scientist - A
New Maturity Approach
Sanjay Shirude, ACCEL BI
We are experiencing the need for advanced
analytics pertaining to Big Data, Machine
Learning and the Internet of Things, which
carry large volumes of collected data. The role
of the Data Modeler is growing beyond its
traditional relationship with data to
constructing predictive models and
incorporating decision analysis in what closely
resembles the role of Data Scientist.
This session will provide you with the framework to
grow from the traditional data modeler to
incorporating a combination of truly disruptive
drivers of business and social value. From this,
you will walk away with a new relationship to
the data ecosystem as a data professional.
Dr. Sanjay Shirude is the founder and CEO of
ACCEL BI. He has 25+ years' experience as a
practitioner, consultant, and educator in the
field of Business Information Technology
Integration. He brings a simplified and
balanced perspective to the integration of
business and technology, encompassing both
business leadership and technical roles,
focusing on areas such as Data Management,
Data Governance, Enterprise Data
Warehousing, Master Data Management
(MDM), and applied Business Intelligence
architecture/analytics. His experience covers
Agile, scrum, and SDLC waterfall
methodologies, with roles as a program
manager, scrum master, product owner,
analyst, trainer, and mentor. Sanjay serves as
VP of Education and Research for DAMA-I,
holds a Ph.D. in Information Management, a
Master's in Statistics, a Master's in Management,
and PMP, CDMP, and CBIP certifications. He
is a contributor and co-author to DMBOKII
and Enterprise Data Management Expert.
Natural vs. Surrogate
Primary Keys in Logical Data
Models
Gary Whitney, Microsoft
Learn why identifying the Primary Key of an
Entity by finding its Natural key(s) makes for
a much better Logical Data Model than using
a Surrogate Key.
Actual examples will be used to illustrate how
data modeling errors can be discovered when
using natural keys, and how business data
rules may be impossible to model if a
surrogate key is used.
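The point can be illustrated with a tiny example (hypothetical table and column names, not the presenter's actual cases): with only a surrogate key, the database happily accepts the same real-world entity twice; declaring the natural key unique lets the database enforce the rule.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Surrogate key only: the duplicate row is silently accepted.
con.execute("CREATE TABLE product_s (id INTEGER PRIMARY KEY, sku TEXT)")
con.execute("INSERT INTO product_s (sku) VALUES ('A-100')")
con.execute("INSERT INTO product_s (sku) VALUES ('A-100')")  # same entity twice
dup_count = con.execute(
    "SELECT COUNT(*) FROM product_s WHERE sku = 'A-100'").fetchone()[0]

# Natural key declared as the primary key: the duplicate is rejected.
con.execute("CREATE TABLE product_n (sku TEXT PRIMARY KEY)")
con.execute("INSERT INTO product_n VALUES ('A-100')")
try:
    con.execute("INSERT INTO product_n VALUES ('A-100')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

print(dup_count, rejected)
```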
Gary has over 40 years of experience designing
and building software systems for major
corporations like Lockheed, Hydraulic
Research Textron, Howard S. Wright
Construction, Paccar, Met Life, and currently
Microsoft. He is a Principal Information
Architect creating data models for a multi-year
project to support the business operations for
the largest division of Microsoft.
Facilitation and Training Skills
for Data Professionals
Artie Mahal, ASM Group Inc.
A Business Process describes How Work Gets
Done; Data describes the Facts needed to
execute that Process. One without the other
has little value in organizations. If Process is
the body then Data is the nervous system
which makes the body function. In the fast
pace of business change and frequent
reorganizations, the Data Analysts and the
Business and Process Analysts should expand
their value to the organizations by cross-
pollinating their understanding of how to
facilitate Data and Process requirements more
effectively.
The art and craft of enabling individuals and
groups to discuss issues and opportunities
around a shared objective, and to develop agreed
strategies for a common direction is generally
referred to as Facilitation. Facilitation also
includes enabling people to learn through
transfer of knowledge and training in specific
skills by a subject matter expert. The person
or persons skilled in Facilitation are called
Facilitators. The approach for creating
agendas, conducting research, and facilitating
sessions to deliver planned outputs is referred
to as the Facilitation Process.
Using a case study of process improvement
and data design, this workshop will provide
hands-on experience in how Data
Professionals can leverage facilitation
techniques and tools in their craft to be more
effective in gathering requirements and
transferring knowledge to users, and other
data professionals.
What you will learn:
Adult Learning Theory and the
Learning Process
Multiple Intelligences Framework for
effective design and delivery of work
sessions
Session Leader qualities and
competencies; Session environment
setup
Methods and tools including the use of
engagers and energizers
Designing Agendas; Facilitation
Framework and how to self-develop for
success
For two decades Artie Mahal successfully led
mission-critical management support
programs as Effective Business Change
Regional Manager for North America and
Latin America at Mars International. While at
Mars International he developed and delivered
programs on Information Resource
Management, Business Change/Process
Management and Learning and Leadership
Development. His last role at the company was
to manage Training and Development
including the formation of Mars University in
North America. Artie has provided services on
four continents and has been a speaker at
national and international professional forums
including Seton Hall University’s MBA
program and Rutgers University Business
College. Artie Mahal has been a Senior Consultant
with BPTrends Associates since 2006. He is
also the founder of ASM Group and is a
Business Process Management (BPM)
consultant and trainer, developing and
delivering BPM professional services privately
to corporations and publicly through Boston
University’s Corporate Education Center.
Artie is the author of two books: 1) How Work
Gets Done, Business Process Management,
Basics and Beyond, and 2) Facilitator’s and
Trainer’s Toolkit. Artie is an accomplished
facilitator and has facilitated workshops
internationally in North America, Europe and
Asia Pacific regions. His workshops are highly
interactive and use state of the art methods
such as a “brain compatible learning method.”
He has facilitated workshops for Strategic
Planning, Business Process Improvement,
Ideation, After Action Reviews and Project
Management. Artie is a certified trainer in
Business Process Management (BPM), Human
Change Management, Diversity and Project
Management.
Semantic Structure Analysis and
Dimensional Modeling
Dave Wells, Infocentric
Well-designed dimensional data provides
business capability for iterative and
interactive analysis – getting quick answers to
business questions as they arise. Most
dimensional modelers begin with a set of
known business questions, and then develop
star-schema designs to answer those
questions. This approach, while theoretically
sound, has some deficiencies in practice.
Answering today’s known questions is a good
beginning. But the best and most sustainable
dimensional data structures are designed to
answer unanticipated questions well into the
future.
The challenge, of course, is how to design data
structures to answer unknown questions that
may occur at some future time. Direct
translation of questions into schema doesn’t
get the job done. This tutorial describes an
alternative approach that enriches
dimensional models by examining the
semantic structure of a representative set of
questions. Translating semantics instead of
specific questions produces more robust
dimensional data models.
Attendees will learn:
Techniques to parse business
questions and discover the underlying
semantic structure of those questions.
The skills needed to map business
question semantics as facts and
qualifiers.
How to translate facts and qualifiers
into logical dimensional models.
Tips and techniques to extend and
enrich dimensional models, thus
increasing their long-term value and
usefulness.
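A crude illustration of the parsing idea (a toy, not Dave Wells's actual technique): treat the measured quantity as the fact and each "by ..." phrase as a qualifier that becomes a candidate dimension.

```python
import re

def parse_question(question: str) -> dict:
    """Split a 'measure by qualifier by qualifier' business question
    into a candidate fact and its qualifying dimensions."""
    parts = re.split(r"\s+by\s+", question.strip().rstrip("?"))
    return {"fact": parts[0], "qualifiers": parts[1:]}

q = parse_question("total sales amount by region by month")
print(q)
```

Applied across a representative set of questions, the recurring qualifiers suggest the dimensions a robust star schema should carry.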
Dave Wells is actively involved in information
management, business management, and the
intersection of the two. As a consultant he
provides strategic guidance and mentoring for
Business Intelligence, Performance
Management, and Business Analytics
programs - the areas where business
effectiveness, efficiency, and agility are driven.
As an educator he plans curriculum, develops
courses, and teaches for organizations such as
TDWI and eLearningCurve. On a personal
level, Dave is a continuous learner, currently
fascinated with understanding how we think,
both individually and organizationally. He
studies and practices systems thinking, critical
thinking, lateral thinking, and divergent
thinking, and he now aspires to develop deep
understanding and appreciation for the art
and science of innovation.
Data Governance for Data
Modelers Workshop
Deborah Henderson, Broadstreet Data
Data Governance may seem like a faraway
topic to data modelers: too strategic to have
much direct impact on a modeler’s work. This
seminar will connect the strategy directly to
the modeler’s work and show the benefits of
data governance in a systemic way as
modelers always knew systems development
really works.
In this workshop we will look at:
Data Governance as a framework
What modelers are already doing - and
the connection to data governance
Modeling in context - architecture,
design, operations
What’s important and when -
principles based modeling
We will complete ‘hands on’ exercises:
Estimating templates – from model
discovery to model creation
Reporting on modeling activity and
governance scorecards
Organizing for Quality, and the modelers’
place in this
Deborah Henderson, B.Sc., MLS, PMP, CDMP
is the Data Governance Practice Manager for
Broadstreet Data in Toronto and teaches data
governance fundamentals classes publicly and
privately. She is Program Director for the
DAMA-DMBOK (Data Management Body of
Knowledge), a global effort going on since 2005.
With over 25 years in data management, she
consults in data governance in the energy,
capital markets, health and automotive sectors.
About MongoDB
Open Software Integrators
This is an introductory training on MongoDB.
We'll explain what it is and when you should use a
document database like MongoDB.
You'll learn:
how to install MongoDB
how to insert data into MongoDB
how to query data from MongoDB
basic schema design in MongoDB
clustering and network topologies
monitoring with Mongo Management
Service
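Since a running MongoDB instance can't be assumed on paper, here is the document-model idea in plain Python: related data is embedded in one document rather than normalized across tables, and queries match on fields. The training itself uses MongoDB's own shell and drivers; this is only a conceptual sketch with invented collection data.

```python
# Documents embed related data (orders inside the customer) instead of
# joining normalized tables -- the core schema-design shift MongoDB asks for.
customers = [
    {"_id": 1, "name": "Acme", "orders": [{"sku": "A-100", "qty": 2}]},
    {"_id": 2, "name": "Globex", "orders": []},
]

def find(collection, **criteria):
    """Tiny stand-in for a Mongo-style find(): match documents on fields."""
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in criteria.items())]

result = find(customers, name="Acme")
print(result[0]["orders"])  # the customer's orders arrive with the document
```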
The training will be conducted by Open
Software Integrators, a Big Data consulting
and services company specializing in Hadoop,
Cassandra, MongoDB and other NoSQL
technologies. OSI focuses on executive strategy,
initial install, design, and implementation,
helping companies transition from legacy
systems into data-driven organizations.
UML in Depth
Norman Daoust, Daoust Associates
An in-depth look at those UML diagram types
of most importance to data professionals: use
case, activity, class, object, state machine,
timing, sequence, communication and package.
The presentation includes best practice
guidelines and tips for each of these diagram
types. They will be illustrated with examples
from a case study. We will briefly illustrate
how to model services and their operations for
Service Oriented Architecture (SOA).
Note: This is not an introductory
session. Attendees should be familiar
at least with use case and class
diagrams.
Attendees will learn:
for each of the listed diagram types:
modeling tips, diagram layout tips,
naming guidelines
the relationships between the different
diagram types
how these diagram types can assist
data professionals in their work
Norman Daoust founded his consulting
company Daoust Associates,
www.DaoustAssociates.com in 2001. His
clients have included the Centers for Disease
Control and Prevention (CDC), the Veterans
Health Administration, the Canadian Institute
for Health Information, a Fortune 500 software
company, and several start-ups. He has been
an active contributor to the healthcare industry
standard data model, the Health Level Seven
(HL7) Reference Information Model (RIM)
since its inception. He enjoys introducing data
and process modeling concepts to the business
analysis community and conducting business
analysis training courses. Norman’s book,
“UML Requirements Modeling for Business
Analysts” explains how to adapt UML for
analysis purposes.
The Journey to an Enterprise
Data Model
Sherri Adame, Premier Farnell
Metadata management is a reasonable place
to begin if your organization is not practicing
data or information governance formally. If
data requirements are always a last thought
or rarely considered, Metadata can be your
organization’s first step toward developing a data
modeling practice. This presentation is about
Premier Farnell’s Metadata management
journey to maintaining an enterprise data
model.
Sherri Adame is passionate and evangelizes all
things data. She is well respected and speaks
frequently on data governance and various
data management topics. Named one of the Top 25
information managers of 2011 by InformationWeek,
she constantly challenges the status quo. She
has an engaging personality and style that
makes the dullest of topics engaging and
exciting for all levels of an organization.
Evaluating Data Modeling Tools?
Helping You to Decide
George McGeachie, Metadata Matters
The evaluation and selection of a data
modeling tool for your organization can be a
daunting task. Not only are there numerous
technical criteria and requirements, but there
are often political, organizational and cultural
challenges as well: the place of data modeling
in the organization, the types of models to be
created (Enterprise, Conceptual, Logical,
Physical), and historical considerations, e.g., a
database administrator may have a “favorite”
tool that he has used in the past. The corporate
standard might dictate
yet another technology, which may not align
technically with your particular project. There
may be a push to use technical checklists or
formal RFPs that may not apply to your
individual needs. Information from vendors
may be flavored with their own particular
strengths, which may not be relevant to your
requirements. So how do you sort through all
of these conflicting messages to choose the tool
that is right for you and your culture?
It is imperative that your organization’s
requirements be fully understood, documented,
and prioritized, that the team responsible
for the decision process clearly highlights the
implications of requirements before the
evaluation process gets too far along, and that
the team be well versed in diplomacy and
stakeholder management.
This presentation will describe the factors to
consider – technical, organizational, and
cultural – when evaluating a data modeling tool
and share a simple 10-step process that
anyone can adopt.
To avoid frustrations and streamline the
decision making process, leverage this 10-step
guide to find the data modeling solution
best suited to your unique business needs. It
will enable the evaluation team to make a
strategic and sound decision, and maybe even
make you the data modeling evaluation hero.
George McGeachie has spent his working life
creating, managing and linking data models,
process models, and others. He encourages
organizations to connect and utilize their
metadata islands, to recognize the wealth of
information contained in their data models, to
recognize that the creation of data models must
form part of an integrated approach to
improving their business, and therefore
recognize the importance of avoiding the
creation of islands of metadata in the first
place.
Crossing the Unstructured Barrier
Bill Inmon, Forest Rim Technologies
The most exciting advances in technology have
been made in the arena of incorporating
textual data into the corporate decision
making process. This presentation addresses
the reality of textual exploitation of medical
records, call center information, restaurant
and hotel feedback analysis, and other arenas
where text is found.
Bill Inmon – the “father of the data warehouse” –
has written 53 books published in nine
languages. Bill’s latest adventure is the
building of technology known as textual
disambiguation – technology that reads raw
text in a narrative format and allows the text to
be placed in a conventional database so that it
can be analyzed by standard analytical
technology, thereby creating unique business
value for Big Data/unstructured data. Bill
was named by ComputerWorld as one of the ten
most influential people in the history of the
computer profession. For more information
about textual disambiguation refer to
www.forestrimtech.com.
Modern Data Architecture? Or
Fresh Messaging for Familiar
Concepts?
Eddie Sayer, Teradata
The data and analytics market is vastly
different from what existed only two to three
years ago. The number of ‘big data’ technology
alternatives is staggering. The momentum of
open-source Hadoop is undeniable. Numerous
organizations are confounded by how to
proceed.
Throughout this evolution, however, many
fundamental data architecture requirements
have remained remarkably stable.
Organizations still must ingest and process
new sources of data. Organizations still
perform cross-functional analysis to glean
insights. Organizations still optimize data for
access and carry out rapid experiments. Many
data architecture constructs are as applicable
today as they were 10-15 years ago. In fact,
some have been extended and rebranded using
terms such as ‘Modern Data Architecture’ and
‘Data Lake’.
This presentation examines the emergence of
reference information architectures gleaned
from numerous client projects to help make
sense of the confusing marketplace, and in the
process, paint a vision for future data and
analytic solutions.
For over two decades, Eddie has been helping
large organizations gain sustainable
competitive advantage with data. He has
worked at length in various roles including
enterprise data management, enterprise
architecture, data modeling and data
warehousing. Eddie joined Teradata in 2008
and has since conducted numerous
engagements with clients, helping to set
direction for data management. Prior to
joining Teradata, Eddie was a Data Architect
at CheckFree Corporation, the largest provider
of e-billing and electronic bill payment in the
US. Previously, Eddie held similar positions at
Macys Systems and Technology and Inacom
Corporation. Eddie currently serves as the VP
of Programs for the Georgia chapter of Data
Management International (DAMA) and is a
frequent speaker at industry events.
Advanced SQL Queries
Michael Blaha, Modelsoft Consulting
SQL is underutilized in software development.
Way too many developers think there is
nothing to SQL but store and retrieve. For
example, many programmers use a layer to
hide a database with the layer storing and
retrieving data a record at a time.
This session will explain the motivations for
using advanced SQL.
We will include multiple examples of advanced
SQL queries. The queries will illustrate some
of the possibilities. We expect the queries to be
helpful templates for attendees to use in their
own work.
The session will present business situations
where we have used advanced SQL queries.
For example, we often write meta-SQL, SQL
code that generates SQL code. For one project
we read an ERwin data dictionary and with
meta-SQL generated database comment
commands to load comments for each table
and column. As another example, we wrote
SQL queries to look for time gaps and overlaps
in staged data for a data warehouse.
Advanced SQL queries fit well into application
development. One technique is to encapsulate
advanced SQL logic within stored procedures
that provide an API to application
programmers.
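The time-gap check mentioned above can be sketched with a window function; the example below runs against Python's built-in sqlite3, and the table and column names are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staged (id INTEGER, start_day INTEGER, end_day INTEGER)")
con.executemany("INSERT INTO staged VALUES (?, ?, ?)",
                [(1, 1, 5), (1, 6, 9), (1, 12, 15)])  # gap between day 9 and 12

# LAG() pairs each row's start with the previous row's end within the same id;
# a difference greater than 1 flags a gap in the staged history.
gaps = con.execute("""
    SELECT start_day, prev_end FROM (
        SELECT start_day,
               LAG(end_day) OVER (PARTITION BY id ORDER BY start_day) AS prev_end
        FROM staged)
    WHERE prev_end IS NOT NULL AND start_day - prev_end > 1
""").fetchall()
print(gaps)
```

The same windowed comparison, with the inequality reversed, finds overlaps instead of gaps.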
Michael Blaha is a consultant and trainer who
specializes in conceiving, architecting,
modeling, designing, and tuning databases. He
has worked with dozens of organizations
around the world. Blaha has authored seven
U.S. patents, seven books, and many articles.
His most recent book is “UML Database
Modeling Workbook”. He received his doctorate
from Washington University in St. Louis and is
an alumnus of GE Global Research in
Schenectady, New York.
Accounting: The Essence
David Hay, Essential Strategies, Inc.
Data modelers typically are not big fans of
accounting. When they took the introductory
class in college, they found it to be about lots of
arithmetic procedures for dealing with
seemingly obscure topics – but somehow the
underlying structure and nature of accounting
never quite got across. (OK, it’s true. “They” in
this case is Dave Hay, the author of this
presentation.)
In preparing his first book, Data Model
Patterns: Conventions of Thought, Dave Hay
finally figured it out: Accounting is in fact a
business modeling language itself – one that
predates data modeling by some 400 years. It
has a rigorous structure, which is, as modeling
languages go, remarkably clever.
What this means is that, to model accounting
is not like modeling any other subject area.
This is in fact a meta-model. It has links to the
entire model of an enterprise’s data – but it is
fundamentally at right angles to it.
This presentation will describe Dave Hay’s
data model of accounting (well, double-entry
bookkeeping), and will include some recent
insights into how computed fields can enforce
the underlying rules that must be followed.
People with an accounting background may
get new insights into the nature of their field,
while those who’ve never been able to
understand accounting will have the
opportunity, for the first time, to get a clear
understanding of exactly what it is about.
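The "underlying rules" of double-entry bookkeeping boil down to one invariant: every transaction posts entries whose debits and credits sum to zero. A minimal sketch of that invariant (illustrative only, not Dave Hay's model; account names are invented):

```python
def post_transaction(ledger, entries):
    """Post a balanced set of (account, amount) entries.
    Debits are positive, credits negative; they must sum to zero."""
    if sum(amount for _, amount in entries) != 0:
        raise ValueError("unbalanced transaction")
    for account, amount in entries:
        ledger[account] = ledger.get(account, 0) + amount
    return ledger

ledger = {}
# Buy equipment for cash: debit Equipment, credit Cash.
post_transaction(ledger, [("Equipment", 500), ("Cash", -500)])
print(ledger)
```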
In the Information Technology Industry since it
was called “Data Processing”, Dave Hay has
been producing data models to support
strategic and requirements planning for nearly
thirty years. He has worked in a variety of
industries, including, among others, banking,
clinical pharmaceutical research, and all
aspects of oil production and processing.
For over 22 years he has been President of
Essential Strategies, Inc., a consulting firm
dedicated to helping clients define corporate
information architecture, identify
requirements, and plan strategies for the
implementation of new systems. Dave is the
author of the original data model patterns
book, Data Model Patterns: Conventions of
Thought, as well as books on requirements
analysis and metadata after that.
In 2011, he published the successor book,
Enterprise Model Patterns: Describing the
World. This is a comprehensive set of patterns
addressing enterprise models from several
levels of abstraction. Since he took the unusual
step of producing model patterns in UML, a
follow-on book, UML and Data Modeling: A
Reconciliation was published recently. He
has spoken at numerous international and
local DAMA, user group, and other conferences.
Using ISO 8000 to Measure the
Quality of Master Data
Peter R. Benson, Electronic Commerce Code
Management Association (ECCMA)
This one hour presentation focuses on ISO
8000, the international standard for quality
data, and how the standard is used in
measuring the quality of master data. By the
end of this presentation you will be able to use
ISO 8000 to objectively measure the quality of
master data.
Mr. Peter Richard Benson is the Founder and
Executive Director of the Electronic Commerce
Code Management Association (ECCMA). The
international association was founded in 1999
to develop and promote the implementation of
co-operative solutions for the unambiguous
exchange of information.
Peter has enjoyed a long career in data driven
systems starting with early work on debugging
Vulcan the precursor of what became dBase,
one of the early relational database
applications designed for the personal
computer market. Peter went on to design
WordStar Messenger, one of the very first
commercial electronic mail software
applications which included automated high to
low bit conversion to allow eight bit word-
processing formatting codes to pass through the
early seven bit UNIX email systems. Peter
received a British patent in 1992 covering the
use of automated email to update distributed
databases. From 1994 to 1998 Peter chaired
the ANSI committee responsible for the
development of EDI standards for product data
(ASC X12E). Peter was responsible for the
design, development, and global promotion of
the UNSPSC as an international commodity
classification for spend analysis and
procurement. Most recently, in pursuit of a
faster, better and lower cost method for
obtaining and validating master data, Peter
designed and oversaw the development of the
eOTD, eDRR and eGOR as open registries of
terminology, data requirements and
organizations mirrored on the NATO
cataloging system. Peter is also the project
leader for ISO 8000 (data quality) and ISO
22745 (open technical dictionaries).
From Operational to Analytics: An
Exploration of Data Model Designs
for Software Business
Applications
Ralph Hollinshead and Goran Stanisic,
SAS Institute
Physical database design decisions are largely
dependent on the expected workload but can
also vary widely depending on the database
technology being used. In this session, we will
cover some real world examples of SAS
Industry solution data models deployed and
designed for different database targets. These
data models include areas such as customer
intelligence, sensor data for machine
maintenance, and banking risk data. In
addition to exploring the general design
decisions and tradeoffs made, we will cover
some advanced techniques such as the use
of JSON datatypes and partitioning strategies.
We will also look into design decisions for
Hadoop Hive tables as well as NoSQL
databases such as HBase. Attendees of this
session will gain a good real world overview of
advanced physical data model design
techniques that they may consider useful for
their own data model work.
Ralph Hollinshead is an experienced leader in
data modeling, database design, and
development in Financial Services,
Government, Life Sciences, Retail, and
Communications. He has domain knowledge as
well as database expertise in SAS, Oracle,
Teradata, Hadoop and other major relational
databases.
His proven experience includes team
development and management in database
design for both operational and business
intelligence usage as well as developing for Big
Data implementations in Hadoop
environments. He is the development manager
for SAS Banking industry reference data model
as well as oversight responsibility for the data
architecture for SAS industry solutions. He is a
leader in setting and implementing corporate
standards for data architecture.
Goran Stanisic has over 25 years of professional IT
experience in the healthcare, life sciences,
insurance, and financial industries. He possesses
hands-on experience across the full data
warehouse development lifecycle, and has
managed and led teams of data architects.
In the last 10 years he has held roles ranging from
data warehouse Solution Architect and Principal
Data Integration Architect to Enterprise
Data Architect. He offers both breadth and depth
of knowledge in the areas of data warehouse
architecture, as well as development and
adoption of data modeling standards and best
practices.
Understaffed with data modelers?
How to train developers as
apprentice data model reviewers
Sally Greenwood, TDS Telecom
Are you envious of IT shops with a large team
of expert data modelers? Are you struggling to
“make do” with one or two, and agonizing over
the projects you have to let go into production
with problem-laden data designs?
Learn to leverage the small number of expert
data modelers in your IT shop by creating a
Logical Data Model Review program staffed by
apprentice data modelers from your
development teams.
This session provides a practical, step-by-step
plan to start a successful logical data model
review program that leverages your scarce
expert data modeling resources.
Learn how to:
Select a group of apprentice data
modelers from the development staff
who can review the less-complicated
data models for projects
Train and mentor the apprentices
Set up a simple process for project
teams to request a reviewer
Define guidelines for assigning
reviewers, and which reviews should
be reserved for one of the Data
Architects
Document the results of the review
Partner with the DBA staff to ensure
only reviewed designs are
implemented.
Sally Greenwood, CBIP, is a Data Architect
with TDS Telecom in Madison, WI. She has 25
years of experience designing and reviewing
logical data models for operational and data
warehouse systems for a wide variety of
businesses and organizations, including retail,
manufacturing, health care, and
telecommunications. She has been writing and
teaching Logical Data Modeling, Requirements
Analysis, Complex Decision Making and
Professional Facilitation courses since 1985.
Her current Logical Data Model Review
Program is entering its sixth year and has
reviewed over 300 projects. Some of her greatest
professional satisfaction comes from
identifying and growing talent in others.
MapReduce vs. OLAP – Do These Two
Worlds Collide?
Dave Wells, Infocentric
The emergence and popularity of MapReduce
brings a new debate to the world of data
management. Are MapReduce and OLAP
compatible, competitive, or conflicting? A
simple web search of “MapReduce and OLAP”
yields interesting results:
OLAP is Becoming Obsolete
Can OLAP be done in BigTable?
MapReduce or Data Warehouse?
When the noise quiets and the dust settles I
believe we’ll find that both OLAP and
MapReduce are alive and well. Each is
designed for a different purpose – OLAP for
interactive analysis of data and MapReduce
for making sense of big data. This presentation
explores how big data – specifically key-value
pair data from MapReduce – works to enrich
and extend dimensions and to increase the
value that we can deliver with OLAP
databases.
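To make the "key-value pair data" concrete, here is a minimal Python sketch of the two MapReduce phases (our own illustrative functions, not material from the talk); the aggregated pairs are the kind of output that could enrich an OLAP dimension with usage attributes:

```python
from collections import defaultdict

def map_phase(records):
    # Emit (key, value) pairs, e.g. one pair per term seen in a log line.
    for line in records:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Group by key and aggregate the values, as a MapReduce reducer would.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

logs = ["widget gadget widget", "gadget widget"]
counts = reduce_phase(map_phase(logs))
# counts now holds per-key totals that could feed an OLAP product dimension.
```

In a real cluster the map and reduce phases run in parallel across many nodes; the shape of the output, keyed aggregates, is what matters for joining back to dimensional data.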
Dave Wells is actively involved in information
management, business management, and the
intersection of the two. As a consultant he
provides strategic guidance and mentoring for
Business Intelligence, Performance
Management, and Business Analytics
programs - the areas where business
effectiveness, efficiency, and agility are driven.
As an educator he plans curriculum, develops
courses, and teaches for organizations such as
TDWI and eLearningCurve. On a personal
level, Dave is a continuous learner, currently
fascinated with understanding how we think,
both individually and organizationally. He
studies and practices systems thinking, critical
thinking, lateral thinking, and divergent
thinking, and he now aspires to develop deep
understanding and appreciation for the art
and science of innovation.
Data Vault Fundamentals and
Workshop
Hans Hultgren, Genesee Academy
Data Modeling Zone 2015
Page 23
This Data Vault Workshop is a highly
interactive session. Attendees will first receive
a data vault introduction (and modeling
primer) and then will participate in an active
modeling exercise.
The presentation will cover the drivers for
choosing data vault modeling, the core
fundamentals of the data vault modeling
approach, and several practical insights for
applying data vault modeling in your
organization. There will be time for questions
and discussion concerning topics of interest
from the audience. The modeling exercise is a
small group interactive modeling lab where
attendees will review a business case and work
together to create an effective data vault
model. Groups will then present their models
to the larger group and field questions and
comments from fellow attendees.
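For readers new to the approach, the three core Data Vault constructs can be sketched in Python as follows (an illustrative sketch with invented names, not the workshop's materials; hashing business keys into surrogate keys is a common, though not universal, convention):

```python
import hashlib
from datetime import datetime, timezone

def hash_key(*business_keys):
    # Data Vault implementations commonly derive surrogate keys by hashing
    # the business key(s); MD5 is a typical, but not mandatory, choice.
    return hashlib.md5("|".join(business_keys).encode()).hexdigest()

load_dts = datetime.now(timezone.utc)

# Hub: one row per unique business key.
hub_customer = {"customer_hk": hash_key("C-1001"),
                "customer_id": "C-1001",
                "load_dts": load_dts}

# Link: relates two or more hubs via their hash keys.
link_sale = {"sale_hk": hash_key("C-1001", "P-77"),
             "customer_hk": hub_customer["customer_hk"],
             "product_hk": hash_key("P-77")}

# Satellite: descriptive, historized attributes hanging off a hub or link.
sat_customer = {"customer_hk": hub_customer["customer_hk"],
                "name": "Acme Ltd",
                "load_dts": load_dts}
```

Separating keys (hubs), relationships (links), and descriptive history (satellites) is what lets the model absorb new sources and attribute changes without restructuring existing tables.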
Hans Hultgren is President of Genesee Academy and a Principal at Top Of Minds AB, and a Data Warehousing and Business Intelligence educator, author, speaker, and advisor.
He currently works on Business Intelligence and Enterprise Data Warehousing (EDW) with a focus on Ensemble Modeling and Data Vault, primarily in Stockholm, Amsterdam, Denver, Sydney and NYC.
He published the data modeling book "Modeling the Agile Data Warehouse with Data Vault," which is available on Amazon in both print and Kindle editions.
Specialties: Information Management and
Modeling, Ensemble Modeling, Data Vault
Modeling, Agile Data Warehousing, Education,
e-Learning, Entrepreneurship and Business
Development.
Corporate Dictionary Workshop
Peter R. Benson, Electronic Commerce Code
Management Association (ECCMA)
This is a workshop on the creation,
management and use of a corporate dictionary.
A corporate dictionary includes not only metadata but also classes and controlled values: essentially, all the concepts used in identifying and describing individuals, organizations, goods, or services.
The workshop explores ISO 22745 Part 10: Dictionary representation and Part 11: Guidelines for the formulation of terminology.
The workshop looks at how the corporate
dictionary is used in stating requirements for
master data. By the end of this session you will be able to create an ISO 22745 corporate dictionary and use it to state ISO 22745 requirements for master data.
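As a rough illustration of the idea (not the ISO 22745 representation itself, which uses XML-based exchange formats; all identifiers and entries below are invented), a corporate dictionary and a master data requirement stated against it might look like:

```python
# A toy in-memory corporate dictionary. Each concept gets an identifier, a
# preferred term, a definition and, where applicable, a set of controlled values.
dictionary = {
    "C-0001": {
        "term": "hex head screw",
        "definition": "A screw with a hexagonal head.",
        "controlled_values": None,
    },
    "C-0002": {
        "term": "thread size",
        "definition": "Nominal diameter and pitch of the screw thread.",
        "controlled_values": {"M6 x 1.0", "M8 x 1.25"},
    },
}

def requirement(concept_id, value):
    # A master data requirement references concepts by identifier rather than
    # by free text; controlled values constrain what may be supplied.
    entry = dictionary[concept_id]
    allowed = entry["controlled_values"]
    if allowed is not None and value not in allowed:
        raise ValueError(f"{value!r} is not a controlled value for {entry['term']}")
    return {"concept": concept_id, "term": entry["term"], "value": value}

req = requirement("C-0002", "M6 x 1.0")
```

The point of the identifier-based lookup is unambiguous exchange: two parties who share the dictionary can exchange requirements and data without arguing over what a term means.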
Mr. Peter Richard Benson is the Founding and
Executive Director of the Electronic Commerce
Code Management Association (ECCMA). The
international association was founded in 1999
to develop and promote the implementation of
co-operative solutions for the unambiguous
exchange of information.
Peter has enjoyed a long career in data-driven systems, starting with early work debugging Vulcan, the precursor of what became dBase,
one of the early relational database
applications designed for the personal
computer market. Peter went on to design
WordStar Messenger, one of the very first
commercial electronic mail software
applications, which included automated high-to-low bit conversion to allow eight-bit word-processing formatting codes to pass through the early seven-bit UNIX email systems. Peter
received a British patent in 1992 covering the
use of automated email to update distributed
databases. From 1994 to 1998 Peter chaired
the ANSI committee responsible for the
development of EDI standards for product data
(ASC X12E). Peter was responsible for the
design, development, and global promotion of
the UNSPSC as an international commodity
classification for spend analysis and
procurement. Most recently, in pursuit of a
faster, better and lower cost method for
obtaining and validating master data, Peter
designed and oversaw the development of the
eOTD, eDRR and eGOR as open registries of
terminology, data requirements and
organizations mirrored on the NATO
cataloging system. Peter is also the project
leader for ISO 8000 (data quality) and ISO
22745 (open technical dictionaries).
About Hadoop
Open Software Integrators
We give you a basic introduction to Hadoop: what it is, how you can use it, and why.
This includes an introduction to:
The Hadoop Ecosystem
HDFS: Hadoop Distributed Filesystem
YARN
Name Node
MapReduce
Hive
Basic Installation
Basic operation
Reading/Writing/Navigating HDFS
Getting Data into Hive
Getting Data out of Hive
The training will be conducted by Open
Software Integrators, a Big Data consulting
and services company specializing in Hadoop,
Cassandra, MongoDB and other NoSQL
technologies. OSI focuses on executive strategy,
initial install, design, and implementation, helping companies transition from legacy systems into data-driven organizations.
Data Modeling for Sustainable
Systems
Graham Witt, Ajilon
Systems, like other high-value acquisitions, should continue to work after the warranty period, but far too often they fail to work as intended. Data
modelers can enhance the quality and
lifespan of a system if they take a broader
view than one which simply converts
information storage and retrieval
requirements into a data model. This
workshop looks at how to add value to the
data modeler’s contribution to system
development or package customization,
including:
identifying real-world complexity and
its implications for the choice of data
structures
the role of generalization in managing
such complexity
development of a common vocabulary
for real-world concepts, attributes,
relationships and processes, and use of
that vocabulary in system artifacts
and user interfaces
recognizing how the data is to be used
in Business Intelligence and the
implications of that use for choice of
data structures
analysis of how changes in the real
world are to be recorded in the chosen
data structures, with implications for
process and user interface design
ensuring test plans cover data update
side-effects adequately
managing data model change
effectively.
The workshop includes a series of case
studies drawn from the speaker’s
experience dealing with these issues.
Graham has over 30 years of experience in
delivering effective data solutions to the
government, transportation, finance and utility
sectors. He has specialist expertise in business
requirements, architectures, information
management, user interface design, data
modeling, database design, data quality and
business rules. He has spoken at conferences in
Australia, the US and Europe and delivered
data modeling and business rules training in
Australia, Canada and the US. He has written
two textbooks published by Morgan Kaufmann:
“Data Modeling Essentials” (with Graeme
Simsion) and “Writing Effective Business
Rules”, and writes monthly articles for the
Business Rule Community
(www.brcommunity.com).
Advanced Data Modeling
Challenges Workshop
Steve Hoberman, Steve Hoberman &
Associates, LLC
After you are comfortable with data modeling
terminology and have built a number of data
models, often the way to continuously sharpen
your skills is to take on more challenging
assignments. Join us for a half day of tackling
real world data modeling scenarios. We will
complete at least ten challenges covering these
four areas:
NoSQL data modeling
Agile and data modeling
Abstraction
Advanced relational and dimensional
modeling
Join us as, in groups, we solve and discuss a set of modeling scenarios.
Steve Hoberman taught his first data modeling
class in 1992 and has trained more than
10,000 people since then, spanning every
continent except Africa and Antarctica. Steve is
known for his entertaining and interactive
teaching style (watch out for flying candy!),
and organizations around the globe have
brought Steve in to teach his Data Modeling
Master Class, which is recognized as the most
comprehensive data modeling course in the
industry. Steve is the author of seven books on
data modeling, including the bestseller Data
Modeling Made Simple. His latest book, Data
Modeling for MongoDB, presents a streamlined
approach to data modeling for NoSQL
solutions. One of Steve’s frequent data
modeling consulting assignments is to review
data models using his Data Model Scorecard®
technique. He is the founder of the Design
Challenges group, recipient of the 2012 Data
Administration Management Association
(DAMA) International Professional
Achievement Award, and highest rated
presenter at Enterprise Data World 2014.
Competency Assessment for the
Data Professional
Artie Mahal, ASM Group Inc.
Professional Development does not happen by
accident. It requires an awareness of where
you are today in terms of your skills and
competencies; where you are headed and how
you would get there through proven methods
of development.
In today's fast-paced, dynamic, and global work environment, you must be prepared to embrace change at any time: in what you do, how you do it, and where you do it. Employees are expected to continuously demonstrate their value to the organization through measurable performance. Stagnating in a role, or in a given set of skills, is therefore not an option. Review your transferable and versatile skills: the technical, soft, and organizational skills that can be
applied in a wide variety of positions across
the organization. They are your asset
inventory. Plan to enhance this asset through
a methodical approach to your professional
development.
This presentation will provide you with the
basic concepts of assessing your current skills,
principles of development, and planning tools
for enhancing your developmental journey.
What you will learn:
Understanding of Competency and Skills, and their role in your jobs
Self-Assessment tool: VADI (Variety,
Adversity, Diversity and Intensity)
Development Learning Framework:
70/20/10
Self-Development Mantra: How to Write, Speak, Learn, Think, Present and Network
Creating and maintaining an ongoing
developmental plan
For two decades Artie Mahal successfully led
mission-critical management support
programs as Effective Business Change
Regional Manager for North America and
Latin America at Mars International. While at
Mars International he developed and delivered
programs on Information Resource
Management, Business Change/Process
Management and Learning and Leadership
Development. His last role at the company was
to manage Training and Development
including the formation of Mars University in
North America. Artie has provided services on
four continents and has been a speaker at
national and international professional forums
including Seton Hall University’s MBA
program and Rutgers University Business
College. Artie Mahal has been a Senior Consultant with BPTrends Associates since 2006. He is
also the founder of ASM Group and is a
Business Process Management (BPM)
consultant and trainer, developing and
delivering BPM professional services privately
to corporations and publicly through Boston
University’s Corporate Education Center.
Artie is the author of two books: 1) How Work
Gets Done, Business Process Management,
Basics and Beyond, and 2) Facilitator’s and
Trainer’s Toolkit. Artie is an accomplished
facilitator and has facilitated workshops
internationally in North America, Europe and
Asia Pacific regions. His workshops are highly
interactive and use state-of-the-art methods such as a "brain compatible learning method."
He has facilitated workshops for Strategic
Planning, Business Process Improvement,
Ideation, After Action Reviews and Project
Management. Artie is a certified trainer in
Business Process Management (BPM), Human
Change Management, Diversity and Project
Management.
Six Habitual Architecture
Mistakes and How to Avoid Them
Eddie Sayer, Teradata
Is your architecture characterized by excessive
costs, supportability issues and business
dissatisfaction?
This session examines six habitual
architecture mistakes observed on numerous
client projects: employing a technology-driven
approach; allowing the architecture to
accidentally evolve; ignoring organizational
constraints; deviating from fundamental
principles; reinventing the wheel for common design problems; and straying from engineering discipline. The presenter will explore four
architecture components that are essential for
avoiding the mistakes: a structured
architecture framework; architecture
principles and advocated positions; design
patterns and implementation alternatives;
reference architectures.
The session concludes with proven
recommendations for maturing architecture
capabilities. You will leave the session not only
better educated on architecture, but armed
with ideas for architecting high-quality
solutions in your own environment.
For over two decades, Eddie has been helping
large organizations gain sustainable
competitive advantage with data. He has
worked at length in various roles including
enterprise data management, enterprise
architecture, data modeling and data
warehousing. Eddie joined Teradata in 2008
and has since conducted numerous
engagements with clients, helping to set
direction for data management. Prior to
joining Teradata, Eddie was a Data Architect
at CheckFree Corporation, the largest provider
of e-billing and electronic bill payment in the
US. Previously, Eddie held similar positions at
Macy's Systems and Technology and Inacom
Corporation. Eddie currently serves as the VP
of Programs for the Georgia chapter of Data
Management International (DAMA) and is a
frequent speaker at industry events.
Conducting Data
Modeling Project Meetings
Gordon Everest, University of Minnesota
We will cover best practices in working with
business subject matter experts to gather
information requirements which can lead to
the design of databases to support their
applications. Learn different methods
(extended series of meetings, accelerated JAD
session, interviews), when and how best to use
them, and advanced preparations. This session
is based on actual experiences and a
comparative research project.
Dr. Everest is Professor Emeritus of MIS and
Database in the Carlson School of
Management at the University of Minnesota.
With early “retirement”, he continues to teach
as an adjunct. His Ph.D. dissertation at the
Univ of Pennsylvania Wharton School entitled
“Managing Corporate Data Resources” became
the text from McGraw-Hill, “Database
Management: Objectives, System Functions,
and Administration” in 1986 and remained in
print until 2002!
Gordon has been teaching all about databases,
data modeling, database management systems,
database administration, and data
warehousing since he joined the University in
1970. Students learn the theory of databases,
gain practical experience with real data
modeling projects, and with hands-on use of
data modeling tools and DBMSs. Besides
teaching about databases, he has helped many
organizations and government agencies design
their databases. His approach transfers
expertise to professional data architects within
those organizations by having them participate
in and observe the conduct of database design
project meetings with the subject matter
experts. He is a frequent speaker at
professional organizations such as DAMA.
Case Study: Roadmap to an
Enterprise Logical Data Model
Missy Wittmann, American Family
How many times have you attempted to create
an Enterprise Logical Data Model and had to
put the work aside to work on a project
model? Let’s get together to discuss an
approach that is a win-win for everyone.
Where do we start?
Who needs to be involved?
Socializing the efforts
Get everyone on the same page
Process/Process/Process
Missy Wittmann is an Information Modeling
Engineer Specialist at American Family
Insurance. Missy has worked in the data
modeling field for over fifteen years in various
roles. She started out as a business partner on
a project that was doing some data modeling
and enjoyed the process so much that she
changed career paths. Missy has facilitated
projects for Business Modeling, Logical and
Physical Data Modeling. Data Modeling is an
exciting place to be in the world of technology.
No matter what technology is being used to get
the end result, we always need our data!
Implementing Data Vault in a
Columnar Database
Petr Olmer, GoodData
This session presents some insights into what
it takes to implement a Data Vault data
warehousing solution in a columnar database,
Vertica specifically. Columnar storage
strategies and available analytical functions
imply different reasoning behind the design of
satellites and links.
You will learn the differences of Data Vault
implementation in classical row-oriented and
columnar databases. Lessons learned from
real implementations will be shared. Basic
knowledge of Data Vault architecture is
expected.
Petr Olmer studied multi-agent systems,
artificial intelligence, and declarative
programming. He saw big data for the first
time while working at the Computer Centre at CERN, the European Laboratory for Particle
Physics. Today he works at GoodData,
building tools and defining architecture and
methodology for BI implementations.
FoCuSeD Data Modeling -
facilitated data modeling
Gary Rush, MGR Consulting, Inc.
This interactive session is geared to enable
data analysts to facilitate Data Modeling
sessions with business clients. Gary will show
you how to facilitate, step-by-step, a data
modeling workshop and what skills or tools
are needed at each step. It is a brief summary
of Gary’s book, FoCuSeD Data Modeling Made
Easy. Attendees will learn:
How to build a Data Model with
business clients who have never seen a
data model.
How to use the modeling session to
clarify and re-engineer the business.
How Active Listening affects the model
and how to effectively listen to your
clients.
How to harness the collective
knowledge of your clients to build a
data model that they embrace. How to
make the model truly their model.
Gary Rush, IAF CPF, Founder and President
of MGR Consulting, Inc., attended the U.S.
Naval Academy and is a former Chair of the
International Association of Facilitators (IAF).
He is a recognized leader in the field of
Facilitation and Facilitator training,
managing projects since 1980, facilitating since
1983, and providing Facilitator training since
1985; he remains at the leading edge of the industry as a practicing Facilitator.
As a Facilitator Trainer, he teaches FoCuSeD.
He teaches specific “how to” with an
understanding of the “why” to perform as an
effective Facilitator; he provides detailed
Facilitator and process tools, enhances his
training through effective learning activities,
and, as an IAF CPF Assessor, he covers the
IAF Core Facilitator Competencies and what
students need to do to achieve them. As a
Facilitator, he improves client business
performance through effective application of
exceptional facilitation processes and he is
highly skilled at engaging participants and
guiding them to consensus.
Gary has written numerous “how to” books,
including the FoCuSeD Facilitator Guide – a
comprehensive reference manual sharing his
step-by-step process so that students can
replicate his practices. His alumni often tell us
how much Gary has changed their lives.
Writing effective business rules - a
practical method
Graham Witt, Ajilon
This workshop teaches modelers how to
collaborate with business stakeholders to
develop well-formed and consistent statements
of an organization’s business rules to enhance
business effectiveness, prepare for system
development, and obtain meaningful data from
customers and/or other organizations. This
workshop covers the following topics:
What is a business rule?
Why do organizations have rules?
What is the best way to document a
rule?
Why do we need to document rules?
Types of rule
An end-to-end rule development
process
Producing well-formed rule statements
Rule statement quality assurance.
Graham has over 30 years of experience in
delivering effective data solutions to the
government, transportation, finance and utility
sectors. He has specialist expertise in business
requirements, architectures, information
management, user interface design, data
modeling, database design, data quality and
business rules. He has spoken at conferences in
Australia, the US and Europe and delivered
data modeling and business rules training in
Australia, Canada and the US. He has written
two textbooks published by Morgan Kaufmann:
“Data Modeling Essentials” (with Graeme
Simsion) and “Writing Effective Business
Rules”, and writes monthly articles for the
Business Rule Community
(www.brcommunity.com).
The Data Modeler’s Road to the
Certified Data Management
Professional (CDMP)
Patricia Cupoli, CCP, CDMP, CBIP,
DAMA International
For the data modeler, the CDMP is a designation identifying that its holder has demonstrated a standard level of knowledge
and experience within Data Management and
specifically Data Modeling. The CDMP is
offered through DAMA International and the
ICCP. During the first half of this workshop,
we will be discussing:
CDMP certification process
Topics, concepts and terms of the Data
Modeling exam
Preview of the IS Core and Data
Management Core exams
Workshop attendees will have the ability to
take the Data Modeling exam during the
second half of this workshop. The exam cost is
$285. As a special DMZ feature, you pay only
if you pass (passing is 50% or better).
Bring your own Windows-based laptop; the USB drive must be unencrypted, as the exam runs from this drive. Your exam results and
unofficial performance profile can be viewed
immediately.
Patricia Cupoli, CCP, CDMP, CBIP,
TOGAF®9 Certified, is a course developer and
teaches online Data Management at Edmonds
Community College and ICCP CDMP courses.
She has an extensive background in the areas
of Data Governance, Data Warehousing,
Metadata Solutions and Repositories,
Enterprise Modeling (business process and
data) for Business Re-engineering, Project
Management, IT Strategic Planning, and
Librarianship / Information Science. She has
presented at many DAMA, TDWI and Data
Modeling Zone conferences, and has published
professionally. Pat is the 2006 winner of the
DAMA International Professional Award.
Pat is the DAMA International ICCP Director,
Project Manager for Data Exam Development,
DAMA Education Committee member, past
ICCP Board President, and a past president of
DAMA International, DAMA Chicago, and
DAMA Philadelphia / Delaware Valley. She is
the DMBOK2 Editor and was a contributing author for two DMBOK chapters:
Documents and Content Management, and
Professional Development.
Data Modeling by Example -
Introduction and Workshop
Marco Wobben, BCP Software
In many DMZ presentations, data modeling is
described as both a craft and an art. For most
outsiders, data modeling is some kind of
magic: the data modeler interviews business
experts, studies piles of requirements, talks
some more, and then, hocus pocus, presents a
diagram with boxes, crow’s feet, arrows, etc.
Fact based information modeling is the very
opposite of such magic. It does not require
people to understand the modeler’s magical
language of boxes and arrows. Instead, fact
based information modeling uses natural
language to describe sample facts that are
intelligible to business people. Therefore, it is
also known as “Data Modeling by Example”.
Part 1 – Introduction
The presentation highlights:
the origins and key elements of fact
based modeling;
its place in the requirements
engineering process;
usage in large scale information
management;
some forward engineering capabilities (ER, UML).
Part 2 – Workshop
In the workshop you will install a free
modeling tool on your own Windows computer
and practice with:
verbalizing elementary facts;
modeling with fact expressions;
visualizing and validating your model;
generating output for business experts
and software engineers.
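The "by example" idea can be sketched in a few lines of Python: start from verbalized sample facts and generalize them into fact types and object-type populations. This is our own simplified illustration, not CaseTalk or the FCO-IM notation; the sentence pattern below is invented:

```python
import re

# Elementary facts expressed in natural language, as a business expert
# might verbalize them.
facts = [
    "Employee 'E-17' works in Department 'Sales'.",
    "Employee 'E-23' works in Department 'Finance'.",
]

PATTERN = re.compile(r"(\w+) '([^']+)' works in (\w+) '([^']+)'\.")

def generalize(fact_sentences):
    # From concrete sample facts, abstract the fact type and collect the
    # populations of each object type: the essence of modeling by example.
    fact_types = set()
    populations = {}
    for sentence in fact_sentences:
        m = PATTERN.match(sentence)
        left_type, left_val, right_type, right_val = m.groups()
        fact_types.add(f"{left_type} works in {right_type}")
        populations.setdefault(left_type, set()).add(left_val)
        populations.setdefault(right_type, set()).add(right_val)
    return fact_types, populations

fact_types, populations = generalize(facts)
```

Because the inputs are plain sentences, business experts can validate the model by reading the facts back, rather than interpreting boxes and crow's feet.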
Marco Wobben is partner of BCP Software and
has been developing software for more than 30
years. He has developed applications for
financial experts, software to remotely operate
bridges and a wide range of web applications.
For the past 10 years, he has been the main
developer of CaseTalk, the CASE tool for fact
based information modeling, which is widely
used in universities in the Netherlands and
beyond.