Boston – Code Camp #29

I am excited about the opportunity to present at Code Camp 29 in the greater Boston area. It will be a good time to catch up with old friends and make new ones.

I hope you have time to attend this awesome free event on November 18, 2017 at the Microsoft Office, Five Wayside Road, Burlington, MA.


Here are the details for the first presentation that I will be giving that day.

Topic:

Staging data for Azure SQL Services

Abstract:

Most companies are faced with an ever-growing big data problem. It is estimated that 40 zettabytes of new data will be generated between 2012 and 2020. See the Computerworld article for details.

Most of this data will be generated by sensors and machines. However, only a small portion of the data is available for users. How can IT professionals help business lines gather and process data from various sources?

There have been two schools of thought on how to solve this problem.

Schema on write is represented by the traditional relational database. Raw data is ingested by an extract, transform and load (ETL) process. The data is stored in tables that enforce integrity and allow for quick retrieval. Only a small portion of the total data owned by the company resides in the database.

Schema on read is represented by technologies such as Hadoop or PolyBase. These technologies assume that data integrity was enforced when the text files were generated. The actual table definition is applied during the read operation. All of the data owned by the company can reside in simple storage.
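The contrast between the two schools of thought can be sketched in a few lines of code. This is a minimal illustration using only the Python standard library: SQLite stands in for a traditional relational engine (schema on write), and a raw CSV string stands in for a text file sitting in simple storage (schema on read). The column names and sample values are made up for the example.

```python
import csv
import io
import sqlite3

# --- Schema on write: structure and integrity are enforced before any row lands.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE stocks (
        symbol     TEXT NOT NULL,
        trade_date TEXT NOT NULL,
        close      REAL NOT NULL,
        PRIMARY KEY (symbol, trade_date)
    )
""")
conn.execute("INSERT INTO stocks VALUES ('MSFT', '2017-11-17', 82.40)")

# --- Schema on read: the file is just text until the reader imposes a schema.
raw_file = "MSFT,2017-11-17,82.40\nAAPL,2017-11-17,170.15\n"
columns = ["symbol", "trade_date", "close"]   # definition applied at read time
rows = [dict(zip(columns, line)) for line in csv.reader(io.StringIO(raw_file))]

print(conn.execute("SELECT close FROM stocks WHERE symbol = 'MSFT'").fetchone()[0])
print(rows[1]["symbol"])
```

Note that the relational engine rejects a malformed row up front, while the CSV file happily holds anything; with schema on read, validation is deferred to query time.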

Today, we will learn how to stage data using Azure blob storage. This staged data can be ingested by both techniques.

Presentation Bundle


Here are the details for the second presentation that I will be giving that day.

Topic:

Introduction to Azure Table Storage

Abstract:

Azure Table Storage is Microsoft’s first attempt at a NoSQL database that uses a key-value data store. During this one-hour session, you will learn how to use PowerShell to manage table storage objects.

There are a number of cmdlets in the PowerShell Gallery that can be used to insert, update, delete, and select row data from the key-value store. A real-life example using historical S&P 500 stock data will be explained and tuned for performance.
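The session itself uses PowerShell cmdlets, but the underlying data model is easy to sketch in a few lines: every Table Storage entity is addressed by a (PartitionKey, RowKey) pair, and the remaining properties are schemaless. The class and method names below are illustrative only and are not part of any Azure SDK.

```python
# Toy in-memory model of the Table Storage data model: a dictionary
# keyed by the (PartitionKey, RowKey) composite key.
class ToyTable:
    def __init__(self):
        self._rows = {}

    def insert(self, partition_key, row_key, **properties):
        self._rows[(partition_key, row_key)] = dict(properties)

    def update(self, partition_key, row_key, **properties):
        self._rows[(partition_key, row_key)].update(properties)

    def delete(self, partition_key, row_key):
        del self._rows[(partition_key, row_key)]

    def select(self, partition_key, row_key):
        # Point lookups by the full composite key are the fastest operation.
        return self._rows.get((partition_key, row_key))

# Historical stock prices partition naturally by ticker symbol,
# with the trade date serving as the row key.
table = ToyTable()
table.insert("MSFT", "2017-11-17", close=82.40)
table.update("MSFT", "2017-11-17", close=82.53)
print(table.select("MSFT", "2017-11-17"))   # {'close': 82.53}
```

Choosing a partition key that matches the dominant query pattern (here, one partition per ticker) is what makes lookups cheap; queries that span partitions require a scan.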

Presentation Bundle


Here are the details for the third presentation that I will be giving that day.

Topic:

Introduction to Azure Cosmos Database

Abstract:

Azure Cosmos DB is Microsoft’s new globally distributed, multi-model database service. While there is support for four different application programming interfaces today, we are going to use the SQL API to manage a document store that uses the JSON format.

During this session, we will be covering the object hierarchy that is used to group and manage documents. We will talk about the different ways to load data into the Not Only SQL (NoSQL) database. Security can be implemented at many different levels when deploying the service. JavaScript can be used to extend the database with stored procedures, triggers and functions. Last but not least, we will talk about dynamic reporting using Power BI, the Cosmos DB connector and SQL queries.
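Since documents in the SQL API are plain JSON, the shape of the data and of a typical query is easy to preview. This sketch builds two hypothetical stock documents and mimics, with a list comprehension, the filter that a Cosmos DB SQL query such as `SELECT * FROM c WHERE c.symbol = 'MSFT'` would perform; no Azure client library is involved, and all field names are assumptions for the example.

```python
import json

# Two JSON documents as they might appear in a Cosmos DB collection.
docs = [
    json.loads('{"id": "1", "symbol": "MSFT", "trade_date": "2017-11-17", "close": 82.40}'),
    json.loads('{"id": "2", "symbol": "AAPL", "trade_date": "2017-11-17", "close": 170.15}'),
]

# Local stand-in for: SELECT * FROM c WHERE c.symbol = 'MSFT'
matches = [d for d in docs if d["symbol"] == "MSFT"]
print(matches[0]["close"])
```

The same documents, unchanged, are what the Power BI connector would surface for reporting, which is why the JSON schema you choose up front matters.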

In short, you will have a good understanding of Cosmos DB by the end of the session.

Presentation Bundle


Data Sets:

Data taken from Yahoo Finance for the S&P 500 stock list is saved in both Comma Separated Value (CSV) and JavaScript Object Notation (JSON) formats. These data sets are used in the above presentations.

Description Format Link
S&P 500 – 2013 CSV Download
S&P 500 – 2014 CSV Download
S&P 500 – 2015 CSV Download
S&P 500 – 2016 CSV Download
S&P 500 – 2017 CSV Download
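Converting between the two distributed formats is straightforward with the standard library alone. This is a small sketch, with the column names assumed rather than taken from the actual downloads, showing how one CSV row layout maps to JSON documents.

```python
import csv
import io
import json

# Hypothetical CSV layout; the real downloads may use different columns.
csv_text = "symbol,trade_date,close\nMSFT,2017-11-17,82.40\nAAPL,2017-11-17,170.15\n"

# DictReader applies the header row as the schema for every record.
documents = list(csv.DictReader(io.StringIO(csv_text)))

# Each record is now a dictionary that serializes directly to JSON.
print(json.dumps(documents[0]))
```

Note that everything comes out of the CSV as a string; if the JSON files carry numeric types, a conversion step for the price columns would be needed as well.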

