10 Feb 2017
13:30 - 15:30

Data modelling in the age of Big Data

Course overview:

The big data phenomenon expands the purpose and changes the role of data modelling. The level of uncertainty about data modelling in today’s data ecosystems is high. Most practitioners have more questions than answers. Has data modelling become obsolete? Does unstructured data make modelling impractical? Does NoSQL imply no data modelling? What are the implications of schema-on-read vs. schema-on-write for data modellers? Do entity-relationship and star-schema data models still matter?

You will learn:

Data modelling remains an important process, perhaps more important than ever. But its purpose and methods must change to keep pace with the rapidly evolving world of data. This course examines the principles, practices, and techniques needed for effective modelling in the age of big data.

  • The distinction between data store modelling (schema-on-write) and data access modelling (schema-on-read), and when each is useful
  • The elemental characteristics of data that provide a common denominator for data modelling for all types of data
  • How that common denominator applies across various kinds of databases, including relational, dimensional, NoSQL, NewSQL, graph, and document
  • When traditional logical-to-physical modelling works, and when it makes sense to reverse the process and work physical-to-logical
  • Trade-offs between methodological rigor and discovery-driven exploration in data modelling
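The schema-on-write vs. schema-on-read contrast in the list above can be sketched in a few lines of Python. This is a minimal illustration, not part of the course material: the field names, schema, and helper functions are hypothetical. Schema-on-write validates structure before data is stored; schema-on-read stores raw data as-is and applies structure only when a query reads it.

```python
import json

# Schema-on-write: structure is enforced at ingest time.
# (Illustrative schema; field names are made up for this sketch.)
SCHEMA = {"customer_id": int, "amount": float}

def write_validated(record, store):
    """Reject any record that does not match the schema before storing it."""
    for field, ftype in SCHEMA.items():
        if not isinstance(record.get(field), ftype):
            raise ValueError(f"bad or missing field: {field}")
    store.append(record)

# Schema-on-read: raw data is stored untouched; structure is applied at query time.
def read_projected(raw_store, fields):
    """Parse raw JSON lines and project only the fields this query needs."""
    for line in raw_store:
        doc = json.loads(line)
        yield {f: doc.get(f) for f in fields}

validated = []
write_validated({"customer_id": 1, "amount": 9.99}, validated)

raw = ['{"customer_id": 2, "amount": 5.0, "note": "ad-hoc field"}']
rows = list(read_projected(raw, ["customer_id", "note"]))
```

Note the trade-off the sketch exposes: the validated store guarantees every record's shape up front, while the raw store happily accepts the extra "note" field and leaves interpretation to each reader.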

Geared to:

Data architects; data modellers; database developers; data integrators; data analysts; report developers; and anyone else who needs to make structured enterprise data and non-traditional data sources work together.