New York City – SQL Saturday #716

I am enthusiastic about presenting at SQL Saturday #716 at the Microsoft office in New York City on May 19, 2018. Of course, I am looking forward to seeing old friends, making new acquaintances, and learning something new.

The event will be held at the following address.

Microsoft Technology Center
11 Times Square
New York, NY 10036

Details about the presentations are below.

Topic:

Standard and Custom Auditing of Azure SQL Database

Abstract:

The practice of classifying a company into an industry segment has been around since the 1950s. Wikipedia lists several popular taxonomies in current use. Some industries are more heavily regulated and face stricter compliance requirements than others. As database administrators, how can we provide an audit trail to a compliance officer when a security issue has occurred?

Coverage:

1 – Azure Auditing using Blob Storage
2 – Table auditing using AFTER triggers (see the sketch after this list)
3 – Using database triggers for object auditing
4 – Using custom stored procedures for fine-grained audits
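
To make the trigger approach concrete, here is a minimal sketch of table auditing with an AFTER trigger. All object names, such as dbo.customer and dbo.customer_audit, are illustrative assumptions rather than objects from the talk's demos.

CREATE TABLE dbo.customer
(
    customer_id INT          NOT NULL PRIMARY KEY,
    full_name   VARCHAR(100) NOT NULL
);
GO

-- One audit row per changed base row, stamped with user and time.
CREATE TABLE dbo.customer_audit
(
    audit_id    INT IDENTITY(1,1) PRIMARY KEY,
    action_type CHAR(1)   NOT NULL,  -- I = insert, U = update, D = delete
    customer_id INT       NOT NULL,
    changed_by  SYSNAME   NOT NULL DEFAULT SUSER_SNAME(),
    changed_on  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

CREATE TRIGGER trg_customer_audit
ON dbo.customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Rows in both pseudo-tables are updates; rows in only one are inserts or deletes.
    INSERT INTO dbo.customer_audit (action_type, customer_id)
    SELECT CASE WHEN d.customer_id IS NULL THEN 'I'
                WHEN i.customer_id IS NULL THEN 'D'
                ELSE 'U' END,
           COALESCE(i.customer_id, d.customer_id)
    FROM inserted i
    FULL OUTER JOIN deleted d ON i.customer_id = d.customer_id;
END;
GO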

Details:

Presentation Bundle


Topic:

Staging data for Azure SQL services

Abstract:

Most companies are faced with an ever-growing big data problem. It is estimated that 40 zettabytes of new data will be generated between 2012 and 2020. See the Computerworld article for details.

Most of this data will be generated by sensors and machines. However, only a small portion of it is available to users. How can IT professionals help business lines gather and process data from various sources?

There have been two schools of thought on how to solve this problem.

Schema on write is represented by the traditional relational database. Raw data is ingested by an extract, transform and load (ETL) process. The data is stored in tables that enforce integrity and allow for quick retrieval. Only a small portion of the total data owned by the company resides in the database.
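
As a small illustration of schema on write (the table and its columns are hypothetical, not from the talk), integrity rules are declared up front and enforced as each row arrives:

-- Integrity is declared when the table is created ...
CREATE TABLE dbo.sale
(
    sale_id     INT           NOT NULL PRIMARY KEY,
    sale_date   DATE          NOT NULL,
    amount      DECIMAL(10,2) NOT NULL CHECK (amount >= 0),
    customer_id INT           NOT NULL
);

-- ... and enforced at write time: the first row loads, the second is rejected.
INSERT INTO dbo.sale VALUES (1, '2018-05-19', 19.99, 42);
INSERT INTO dbo.sale VALUES (2, '2018-05-19', -5.00, 42);  -- fails the CHECK constraint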

Schema on read is represented by technologies such as Hadoop or PolyBase. These technologies assume that data integrity was applied when the text files were generated. The table definition is applied during the read operation. All data owned by the company can reside in simple storage.
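
Here is a minimal schema-on-read sketch using a PolyBase external table over Azure Blob Storage. The placeholders <account> and <container> and all object names are assumptions; a database scoped credential would also be needed for non-public storage.

-- The files already exist in blob storage; no data is moved here.
CREATE EXTERNAL DATA SOURCE azure_blob
WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://<container>@<account>.blob.core.windows.net'
);

CREATE EXTERNAL FILE FORMAT csv_format
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',')
);

-- The table definition is applied to the files only when they are read.
CREATE EXTERNAL TABLE dbo.sale_external
(
    sale_id     INT,
    sale_date   DATE,
    amount      DECIMAL(10,2),
    customer_id INT
)
WITH (
    DATA_SOURCE = azure_blob,
    LOCATION = '/sales/',
    FILE_FORMAT = csv_format
);

SELECT COUNT(*) FROM dbo.sale_external;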

Today, we will learn how to stage data using Azure Blob Storage. This staged data can be ingested by both techniques.

Coverage:

1 – Grab some big data.
2 – Create a blob storage account.
3 – Copy data to a container.
4 – Azure SQL Database plumbing.
5 – Loading data with BULK INSERT (see the sketch after this list).
6 – Azure SQL Data Warehouse plumbing.
7 – Loading data with PolyBase.
8 – Azure Automation with runbooks.
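
To give a flavor of the BULK INSERT step, here is a hedged sketch for Azure SQL Database. The credential, data source, and path names are assumptions, the SAS token is elided, and the target table dbo.sale is presumed to exist.

-- A database master key must already exist before creating the credential.
CREATE DATABASE SCOPED CREDENTIAL blob_credential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<sas-token-without-leading-question-mark>';

-- Point the database at the staging container.
CREATE EXTERNAL DATA SOURCE staging_blob
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://<account>.blob.core.windows.net/<container>',
    CREDENTIAL = blob_credential
);

-- Load one staged file into the target table, skipping the header row.
BULK INSERT dbo.sale
FROM 'sales/2018-05.csv'
WITH (
    DATA_SOURCE = 'staging_blob',
    FORMAT = 'CSV',
    FIRSTROW = 2
);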

Details:

Presentation Bundle
