I am excited about the opportunity to present at Code Camp 29 in the Boston area. It will be a good time to catch up with old friends and make new ones.
I hope you have time to attend this awesome free event on November 18, 2017 at the Microsoft Office, Five Wayside Road, Burlington, MA.
Here are the details behind the first presentation that I gave that day.
Staging data for Azure SQL Services
Most companies are faced with an ever-growing big data problem. It is estimated that 40 zettabytes of new data will be generated between 2012 and 2020. See the Computerworld article for details.
Most of this data will be generated by sensors and machines, yet only a small portion of it is available to users. How can IT professionals help business lines gather and process data from various sources?
There have been two schools of thought on how to solve this problem.
Schema on write is represented by the traditional relational database. Raw data is ingested by an extract, transform and load (ETL) process. The data is stored in tables that enforce integrity and allow for quick retrieval. Only a small portion of the total data owned by the company resides in the database.
Schema on read is represented by technologies such as Hadoop and PolyBase. These technologies assume that data integrity was enforced when the text files were generated. The actual definition of the table is applied during the read operation. All data owned by the company can reside in simple storage.
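The schema-on-read idea can be shown in miniature: the raw text file carries no types at all, and a column definition (hypothetical here) is applied only at the moment the file is read.

```python
import csv
import io

# Raw text data: nothing is enforced at write time (schema on read).
raw = """date,ticker,close
2017-01-03,SPY,225.24
2017-01-04,SPY,226.58
"""

# A hypothetical "table definition" applied at read time: column -> type.
schema = {"date": str, "ticker": str, "close": float}

def read_with_schema(text, schema):
    """Apply the column types only when the data is read."""
    for row in csv.DictReader(io.StringIO(text)):
        yield {col: cast(row[col]) for col, cast in schema.items()}

rows = list(read_with_schema(raw, schema))
```

If the file had a bad value in the close column, the failure would surface here, at read time, rather than when the file was written.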
Today, we will learn how to stage data using Azure blob storage. This staged data can be ingested by both techniques.
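One common convention when staging files in blob storage is to partition the blob path by date, so that both ETL jobs and schema-on-read tools can locate a day's data predictably. The container and path names below are illustrative, not the talk's actual layout.

```python
from datetime import date

def staged_blob_path(container, dataset, d, filename):
    """Build a date-partitioned blob path, e.g.
    raw-data/sp500/2017/11/18/prices.csv (names are hypothetical)."""
    return f"{container}/{dataset}/{d.year:04d}/{d.month:02d}/{d.day:02d}/{filename}"

path = staged_blob_path("raw-data", "sp500", date(2017, 11, 18), "prices.csv")
```

Zero-padding the month and day keeps the blob listing in chronological order when sorted lexically.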
Here are the details behind the second presentation that I gave that day.
Introduction to Azure Table Storage
Azure Table Storage is Microsoft’s first attempt at a NoSQL database, using a key-value data store. During this one-hour session, you will learn how to use PowerShell to manage table storage objects.
There are several cmdlets in the PowerShell Gallery that can be used to insert, update, delete, and select row data from the key-value store. A real-life example using historical S&P 500 stock data will be explained and tuned for performance.
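Azure Table Storage addresses every entity by a PartitionKey and a RowKey, and a point query supplying both keys is the fastest lookup the service offers. A tiny in-memory sketch of that addressing scheme for the stock example follows; the key choices are assumptions for illustration, not the session's actual design.

```python
# Minimal in-memory model of a table keyed the way Azure Table Storage is:
# the pair (PartitionKey, RowKey) uniquely identifies an entity.
table = {}

def upsert(partition_key, row_key, entity):
    """Insert or replace the entity at (PartitionKey, RowKey)."""
    table[(partition_key, row_key)] = entity

def point_query(partition_key, row_key):
    """Both keys supplied: the cheapest query Table Storage supports."""
    return table.get((partition_key, row_key))

# Hypothetical design: partition by index name, row key is the trade date.
upsert("SP500", "2016-12-30", {"close": 2238.83})
upsert("SP500", "2017-01-03", {"close": 2257.83})
```

Queries that supply only the PartitionKey scan one partition; queries that supply neither key scan the whole table, which is why the key design matters for performance.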
Here are the details behind the third presentation that I gave that day.
Introduction to Azure Cosmos Database
Azure Cosmos DB is Microsoft’s new globally distributed, multi-model database service. While there is support for four different application programming interfaces today, we are going to use the SQL API to manage a document store that uses the JSON format.
In short, you will have a good understanding of Cosmos DB by the end of the session.
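Under the SQL API, each item in a container is a JSON document, and queries are written in a SQL-like grammar over those documents. The sketch below shows the round trip to JSON; the document shape and the query text are illustrative assumptions, not the session's material.

```python
import json

# An illustrative JSON document as it might be stored in a Cosmos DB container.
doc = {
    "id": "SP500-2017-01-03",   # Cosmos DB items carry an "id" property
    "ticker": "SP500",
    "tradeDate": "2017-01-03",
    "close": 2257.83,
}

# Cosmos DB SQL API queries use a SQL-like syntax over documents; this
# query text illustrates the grammar and is not executed here.
query = "SELECT c.tradeDate, c.close FROM c WHERE c.ticker = 'SP500'"

# The document serializes to and from JSON without loss.
restored = json.loads(json.dumps(doc))
```

Because the store is schema-free, two documents in the same container may have entirely different properties; only the query decides which properties matter.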
Sample data: S&P 500 historical price files for 2013 through 2017.