
Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB [DP-420]


Duration: 4 Days

Description

Note: The exam is currently available in beta. Candidates for this exam must have solid knowledge and experience developing apps for Azure and working with Azure Cosmos DB database technologies. They should be proficient at developing applications by using the Core (SQL) API and SDKs, writing efficient queries and creating appropriate index policies, provisioning and managing resources in Azure, and creating server-side objects with JavaScript. They should be able to interpret JSON, read C# or Java code, and use PowerShell.

Objectives

  • Design and implement data models  
  • Design and implement data distribution  
  • Integrate an Azure Cosmos DB solution  
  • Optimize an Azure Cosmos DB solution  
  • Maintain an Azure Cosmos DB solution  

Who Should Attend

A candidate for the Azure Cosmos DB Developer Specialty certification should have subject matter expertise designing, implementing, and monitoring cloud-native applications that store and manage data. Responsibilities for this role include designing and implementing data models and data distribution, loading data into an Azure Cosmos DB database, and optimizing and maintaining the solution. These professionals integrate the solution with other Azure services. They also design, implement, and monitor solutions that consider security, availability, resilience, and performance requirements. A candidate for this exam must have solid knowledge and experience developing apps for Azure and working with Azure Cosmos DB database technologies. They should be proficient at developing applications by using the Core (SQL) API and SDKs, writing efficient queries and creating appropriate index policies, provisioning and managing resources in Azure, and creating server-side objects with JavaScript. They should be able to interpret JSON, read C# or Java code, and use PowerShell. 

Prerequisites

  • Ability to navigate the Azure portal and foundational knowledge of Microsoft Azure (equivalent of AZ-900)
  • Intermediate-level experience with an Azure-supported language (Python, Java, JavaScript, or C#)
  • Ability to write code to connect to and perform operations on a NoSQL or SQL database product (Oracle, MongoDB, Cassandra, SQL Server, or similar)

Course Outline

Module 1: Design and Implement Data Models  

Design and implement a non-relational data model for Azure Cosmos DB Core API  

  • Develop a design by storing multiple entity types in the same container
  • Develop a design by storing multiple related entities in the same document
  • Develop a model that denormalizes data across documents
  • Develop a design by referencing between documents
  • Identify primary and unique keys
  • Identify data and associated access patterns
  • Specify a default TTL on a container for a transactional store
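
To make the modeling topics above concrete, the following is a minimal sketch using the Azure Cosmos DB Java SDK v4 that stores two related entity types in one container, distinguished by a type discriminator and co-located under the same partition key; the account endpoint, key, class, and container names are placeholders.

```java
// Minimal sketch: two entity types share one container, distinguished by a "type"
// discriminator and co-located under the same partition key (/customerId).
// Class, field, and container names are illustrative placeholders.
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;

public class DataModelSketch {

    // A customer document and its sales orders live in the same logical partition,
    // so they can be read together with a single partition-scoped query.
    public static class Customer {
        public String id;
        public String customerId;      // partition key value
        public String type = "customer";
        public String name;
    }

    public static class SalesOrder {
        public String id;
        public String customerId;      // same partition key as the owning customer
        public String type = "salesOrder";
        public double total;
    }

    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://<your-account>.documents.azure.com:443/") // placeholder
                .key("<your-key>")                                           // placeholder
                .buildClient();

        CosmosContainer container = client.getDatabase("retail").getContainer("customerData");

        Customer customer = new Customer();
        customer.id = "c-100";
        customer.customerId = "c-100";
        customer.name = "Contoso";

        SalesOrder order = new SalesOrder();
        order.id = "o-5001";
        order.customerId = "c-100";
        order.total = 129.99;

        container.createItem(customer);
        container.createItem(order);
        client.close();
    }
}
```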

Design a data partitioning strategy for Azure Cosmos DB Core API  

  • Choose a partition strategy based on a specific workload
  • Choose a partition key
  • Plan for transactions when choosing a partition key
  • Evaluate the cost of using a cross-partition query
  • Calculate and evaluate data distribution based on partition key selection
  • Calculate and evaluate throughput distribution based on partition key selection
  • Construct and implement a synthetic partition key
  • Design partitioning for workloads that require multiple partition keys
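
A minimal sketch of a synthetic partition key, formed by concatenating two properties of an item so writes spread across more logical partitions; the entity and property names are purely illustrative.

```java
// Minimal sketch: build a synthetic partition key by combining two properties.
// The container would be created with partition key path "/partitionKey".
// Entity and property names are illustrative placeholders.
public class SyntheticKeySketch {

    public static class DeviceReading {
        public String id;
        public String deviceId;
        public String date;          // e.g. "2024-06-01"
        public String partitionKey;  // synthetic: deviceId + "-" + date
        public double value;
    }

    public static DeviceReading newReading(String deviceId, String date, double value) {
        DeviceReading r = new DeviceReading();
        r.id = java.util.UUID.randomUUID().toString();
        r.deviceId = deviceId;
        r.date = date;
        r.value = value;
        // Combining device and day keeps any single partition from becoming "hot".
        r.partitionKey = deviceId + "-" + date;
        return r;
    }
}
```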

Plan and implement sizing and scaling for a database created with Azure Cosmos DB  

  • Evaluate the throughput and data storage requirements for a specific workload
  • Choose between serverless and provisioned models
  • Choose when to use database-level provisioned throughput
  • Design for granular scale units and resource governance
  • Evaluate the cost of the global distribution of data
  • Configure throughput for Azure Cosmos DB by using the Azure portal
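
Although the outline targets the Azure portal, the same throughput settings can also be applied from code. Below is a minimal sketch with the Java SDK v4; the account endpoint, key, database and container names, and RU/s values are placeholders.

```java
// Minimal sketch: create a container with manual throughput, then raise it later.
// Endpoint, key, names, and RU/s values are placeholders.
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosDatabase;
import com.azure.cosmos.models.CosmosContainerProperties;
import com.azure.cosmos.models.ThroughputProperties;

public class ThroughputSketch {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://<your-account>.documents.azure.com:443/") // placeholder
                .key("<your-key>")                                           // placeholder
                .buildClient();

        client.createDatabaseIfNotExists("retail");
        CosmosDatabase database = client.getDatabase("retail");

        // Dedicated (container-level) manual throughput of 400 RU/s.
        CosmosContainerProperties properties =
                new CosmosContainerProperties("orders", "/customerId");
        database.createContainerIfNotExists(properties,
                ThroughputProperties.createManualThroughput(400));
        // Alternative at creation time: ThroughputProperties.createAutoscaledThroughput(4000)

        // Scale the manual throughput up when the workload grows.
        database.getContainer("orders")
                .replaceThroughput(ThroughputProperties.createManualThroughput(1000));

        client.close();
    }
}
```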

Implement client connectivity options in the Azure Cosmos DB SDK  

  • Choose a connectivity mode (gateway versus direct)
  • Implement a connectivity mode
  • Create a connection to a database
  • Enable offline development by using the Azure Cosmos DB emulator
  • Handle connection errors
  • Implement a singleton for the client
  • Specify a region for global distribution
  • Configure client-side threading and parallelism options
  • Enable SDK logging
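
A minimal sketch of these connectivity options with the Java SDK v4: a single shared client, direct mode, session consistency, and a preferred region list; the endpoint and key are placeholders.

```java
// Minimal sketch: one shared (singleton) client configured with direct mode,
// session consistency, and preferred regions. Endpoint and key are placeholders.
import com.azure.cosmos.ConsistencyLevel;
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import java.util.Arrays;

public final class CosmosClientFactory {

    // Reuse a single client for the application lifetime instead of creating one per request.
    private static final CosmosClient CLIENT = new CosmosClientBuilder()
            .endpoint("https://<your-account>.documents.azure.com:443/") // placeholder
            .key("<your-key>")                                           // placeholder
            .directMode()                         // or .gatewayMode() behind restrictive firewalls
            .consistencyLevel(ConsistencyLevel.SESSION)
            .preferredRegions(Arrays.asList("East US", "West US"))
            .contentResponseOnWriteEnabled(false) // skip returning the document body on writes
            .buildClient();

    private CosmosClientFactory() { }

    public static CosmosClient getClient() {
        return CLIENT;
    }
}
```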

Implement data access by using the Azure Cosmos DB SQL language  

  • Implement queries that use arrays, nested objects, aggregation, and ordering
  • Implement a correlated subquery
  • Implement queries that use array and type-checking functions
  • Implement queries that use mathematical, string, and date functions
  • Implement queries based on variable data
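
A minimal sketch of a parameterized SQL query that touches a nested property, an array function, and ordering, executed through the Java SDK v4; the container and property names are illustrative.

```java
// Minimal sketch: a parameterized query over a nested property with an array
// function and ORDER BY. Property and container names are illustrative.
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.SqlParameter;
import com.azure.cosmos.models.SqlQuerySpec;
import com.azure.cosmos.util.CosmosPagedIterable;
import com.fasterxml.jackson.databind.JsonNode;
import java.util.Arrays;

public class QuerySketch {
    public static void run(CosmosContainer container) {
        SqlQuerySpec query = new SqlQuerySpec(
                "SELECT c.id, c.address.city, ARRAY_LENGTH(c.tags) AS tagCount " +
                "FROM c WHERE c.address.country = @country AND IS_DEFINED(c.tags) " +
                "ORDER BY c.name",
                Arrays.asList(new SqlParameter("@country", "Germany")));

        CosmosPagedIterable<JsonNode> results =
                container.queryItems(query, new CosmosQueryRequestOptions(), JsonNode.class);

        results.forEach(item -> System.out.println(item.toString()));
    }
}
```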

Implement data access by using SQL API SDKs  

  • Choose when to use a point operation versus a query operation
  • Implement a point operation that creates, updates, and deletes documents
  • Implement an update by using a patch operation
  • Manage multi-document transactions using SDK Transactional Batch
  • Perform a multi-document load using SDK Bulk
  • Implement optimistic concurrency control using ETags
  • Implement session consistency by using session tokens
  • Implement a query operation that includes pagination
  • Implement a query operation by using a continuation token
  • Handle transient errors and 429s
  • Specify TTL for a document
  • Retrieve and use query metrics
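
A minimal sketch of several SDK patterns from the list above (a point read with its request-unit charge, a partial-document patch, ETag-based optimistic concurrency, and basic 412/429 handling), using the Java SDK v4 with illustrative type and property names.

```java
// Minimal sketch: point read, patch, and an ETag-guarded replace with basic
// error handling. Type, id, and property names are illustrative placeholders.
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.CosmosException;
import com.azure.cosmos.models.CosmosItemRequestOptions;
import com.azure.cosmos.models.CosmosItemResponse;
import com.azure.cosmos.models.CosmosPatchOperations;
import com.azure.cosmos.models.PartitionKey;

public class SdkAccessSketch {

    public static class Order {
        public String id;
        public String customerId;
        public String status;
    }

    public static void run(CosmosContainer container) {
        PartitionKey pk = new PartitionKey("c-100");

        // Point operation: read by id + partition key and inspect the RU charge.
        CosmosItemResponse<Order> read = container.readItem("o-5001", pk, Order.class);
        System.out.println("Point read cost: " + read.getRequestCharge() + " RU");

        // Patch: update a single property without replacing the whole document.
        container.patchItem("o-5001", pk,
                CosmosPatchOperations.create().replace("/status", "shipped"),
                Order.class);

        // Optimistic concurrency: the replace only succeeds if the ETag is unchanged.
        Order order = read.getItem();
        order.status = "delivered";
        CosmosItemRequestOptions options = new CosmosItemRequestOptions()
                .setIfMatchETag(read.getETag());
        try {
            container.replaceItem(order, order.id, pk, options);
        } catch (CosmosException e) {
            if (e.getStatusCode() == 412) {
                // Precondition failed: someone else changed the document; re-read and retry.
            } else if (e.getStatusCode() == 429) {
                // Throttled: back off for e.getRetryAfterDuration() before retrying.
            }
        }
    }
}
```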

Implement server-side programming in Azure Cosmos DB Core API by using JavaScript  

  • Write, deploy, and call a stored procedure 
  • Design stored procedures to work with multiple items transactionally
  • Implement triggers
  • Implement a user-defined function
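
A minimal sketch that registers and executes a small JavaScript stored procedure through the Java SDK v4; the procedure id, body, and partition key value are illustrative.

```java
// Minimal sketch: register a tiny JavaScript stored procedure and execute it
// scoped to one partition key. Names and arguments are illustrative placeholders.
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosStoredProcedureProperties;
import com.azure.cosmos.models.CosmosStoredProcedureRequestOptions;
import com.azure.cosmos.models.PartitionKey;
import java.util.Arrays;

public class StoredProcedureSketch {

    // A trivial server-side procedure that echoes a greeting back to the caller.
    private static final String BODY =
            "function greet(name) {" +
            "  var response = getContext().getResponse();" +
            "  response.setBody('Hello, ' + name);" +
            "}";

    public static void run(CosmosContainer container) {
        container.getScripts().createStoredProcedure(
                new CosmosStoredProcedureProperties("greet", BODY));

        CosmosStoredProcedureRequestOptions options = new CosmosStoredProcedureRequestOptions();
        options.setPartitionKey(new PartitionKey("c-100"));

        String result = container.getScripts()
                .getStoredProcedure("greet")
                .execute(Arrays.<Object>asList("Cosmos"), options)
                .getResponseAsString();
        System.out.println(result);
    }
}
```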

Module 2: Design and Implement Data Distribution  

Design and implement a replication strategy for Azure Cosmos DB  

  • Choose when to distribute data
  • Define automatic failover policies for regional failure for Azure Cosmos DB Core API
  • Perform manual failovers to move single master write regions
  • Choose a consistency model
  • Identify use cases for different consistency models
  • Evaluate the impact of consistency model choices on availability and associated RU cost
  • Evaluate the impact of consistency model choices on performance and latency
  • Specify application connections to replicated data

Design and implement multi-region write  

  • Choose when to use multi-region write
  • Implement multi-region write
  • Implement a custom conflict resolution policy for Azure Cosmos DB Core API
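
A minimal sketch of a Java SDK v4 client configured for an account that has multi-region writes enabled; the endpoint, key, and region names are placeholders.

```java
// Minimal sketch: enable multi-region writes on the client and list preferred
// regions so operations go to the nearest listed region first.
// Endpoint, key, and region names are placeholders.
import com.azure.cosmos.ConsistencyLevel;
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import java.util.Arrays;

public class MultiRegionWriteSketch {
    public static CosmosClient build() {
        return new CosmosClientBuilder()
                .endpoint("https://<your-account>.documents.azure.com:443/") // placeholder
                .key("<your-key>")                                           // placeholder
                .multipleWriteRegionsEnabled(true)  // the account itself must allow multi-region writes
                .preferredRegions(Arrays.asList("West Europe", "East US"))
                .consistencyLevel(ConsistencyLevel.SESSION)
                .buildClient();
    }
}
```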

Module 3: Integrate an Azure Cosmos DB Solution  

Enable Azure Cosmos DB analytical workloads  

  • Enable Azure Synapse Link
  • Choose between Azure Synapse Link and Spark Connector
  • Enable the analytical store on a container
  • Enable a connection to an analytical store and query from Azure Synapse Spark or Azure Synapse SQL
  • Perform a query against the transactional store from Spark
  • Write data back to the transactional store from Spark

Implement solutions across services  

  • Integrate events with other applications by using Azure Functions and Azure Event Hubs
  • Denormalize data by using Change Feed and Azure Functions
  • Enforce referential integrity by using Change Feed and Azure Functions
  • Aggregate data by using Change Feed and Azure Functions, including reporting
  • Archive data by using Change Feed and Azure Functions
  • Implement Azure Cognitive Search for an Azure Cosmos DB solution

Module 4: Optimize an Azure Cosmos DB Solution  

Optimize query performance in Azure Cosmos DB Core API  

  • Adjust indexes on the database
  • Calculate the cost of the query
  • Retrieve request unit cost of a point operation or query
  • Implement Azure Cosmos DB integrated cache

Design and implement change feeds for an Azure Cosmos DB Core API  

  • Develop an Azure Functions trigger to process a change feed
  • Consume a change feed from within an application by using the SDK
  • Manage the number of change feed instances by using the change feed estimator
  • Implement denormalization by using a change feed
  • Implement referential enforcement by using a change feed
  • Implement aggregation persistence by using a change feed
  • Implement data archiving by using a change feed
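
A minimal sketch of consuming the change feed with the change feed processor in the Java SDK v4; the host name and the feed and lease container names are placeholders.

```java
// Minimal sketch: a change feed processor that reads changes from a monitored
// container and tracks progress in a lease container. Names are placeholders.
import com.azure.cosmos.ChangeFeedProcessor;
import com.azure.cosmos.ChangeFeedProcessorBuilder;
import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.CosmosClientBuilder;

public class ChangeFeedSketch {
    public static void main(String[] args) {
        CosmosAsyncClient client = new CosmosClientBuilder()
                .endpoint("https://<your-account>.documents.azure.com:443/") // placeholder
                .key("<your-key>")                                           // placeholder
                .buildAsyncClient();

        CosmosAsyncContainer feedContainer =
                client.getDatabase("retail").getContainer("orders");
        CosmosAsyncContainer leaseContainer =
                client.getDatabase("retail").getContainer("leases");

        ChangeFeedProcessor processor = new ChangeFeedProcessorBuilder()
                .hostName("worker-1")                // unique per processor instance
                .feedContainer(feedContainer)
                .leaseContainer(leaseContainer)
                // Each batch of changed documents arrives here; denormalize,
                // aggregate, or archive them as needed.
                .handleChanges(changes ->
                        changes.forEach(doc -> System.out.println(doc.toString())))
                .buildChangeFeedProcessor();

        processor.start().block();  // begin processing; call processor.stop() to shut down
    }
}
```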

Define and implement an indexing strategy for an Azure Cosmos DB Core API  

  • Choose when to use a read-heavy versus write-heavy index strategy
  • Choose an appropriate index type
  • Configure a custom indexing policy by using the Azure portal
  • Implement a composite index
  • Optimize index performance
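
A minimal sketch of a custom indexing policy (a write-heavy, opt-in path strategy plus one composite index) applied at container creation with the Java SDK v4; the paths and names are illustrative.

```java
// Minimal sketch: index only the queried paths and add a composite index for a
// common ORDER BY pair. Paths, database, and container names are illustrative.
import com.azure.cosmos.CosmosDatabase;
import com.azure.cosmos.models.CompositePath;
import com.azure.cosmos.models.CompositePathSortOrder;
import com.azure.cosmos.models.CosmosContainerProperties;
import com.azure.cosmos.models.ExcludedPath;
import com.azure.cosmos.models.IncludedPath;
import com.azure.cosmos.models.IndexingPolicy;
import java.util.Arrays;

public class IndexingPolicySketch {
    public static void createContainer(CosmosDatabase database) {
        IndexingPolicy policy = new IndexingPolicy();
        // Write-heavy strategy: exclude everything, then include only what queries filter on.
        policy.setExcludedPaths(Arrays.asList(new ExcludedPath("/*")));
        policy.setIncludedPaths(Arrays.asList(
                new IncludedPath("/customerId/?"),
                new IncludedPath("/orderDate/?")));

        // Composite index to support ORDER BY c.customerId ASC, c.orderDate DESC.
        CompositePath byCustomer = new CompositePath()
                .setPath("/customerId").setOrder(CompositePathSortOrder.ASCENDING);
        CompositePath byDate = new CompositePath()
                .setPath("/orderDate").setOrder(CompositePathSortOrder.DESCENDING);
        policy.setCompositeIndexes(Arrays.asList(Arrays.asList(byCustomer, byDate)));

        CosmosContainerProperties properties =
                new CosmosContainerProperties("orders", "/customerId");
        properties.setIndexingPolicy(policy);
        database.createContainerIfNotExists(properties);
    }
}
```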

Module 5: Maintain an Azure Cosmos DB Solution  

Monitor and Troubleshoot an Azure Cosmos DB solution  

  • Evaluate response status code and failure metrics
  • Monitor metrics for normalized throughput usage by using Azure Monitor
  • Monitor server-side latency metrics by using Azure Monitor
  • Monitor data replication in relation to latency and availability
  • Configure Azure Monitor alerts for Azure Cosmos DB
  • Implement and query Azure Cosmos DB logs
  • Monitor throughput across partitions
  • Monitor distribution of data across partitions
  • Monitor security by using logging and auditing

Implement backup and restore for an Azure Cosmos DB solution  

  • Choose between periodic and continuous backup
  • Configure periodic backup
  • Configure continuous backup and recovery
  • Locate a recovery point for a point-in-time recovery
  • Recover a database or container from a recovery point

Implement security for an Azure Cosmos DB solution  

  • Choose between service-managed and customer-managed encryption keys
  • Configure network-level access control for Azure Cosmos DB
  • Configure data encryption for Azure Cosmos DB
  • Manage control plane access to Azure Cosmos DB by using Azure role-based access control (RBAC)
  • Manage data plane access to Azure Cosmos DB by using keys
  • Manage data plane access to Azure Cosmos DB by using Azure Active Directory
  • Configure Cross-Origin Resource Sharing (CORS) settings
  • Manage account keys by using Azure Key Vault
  • Implement customer-managed keys for encryption
  • Implement Always Encrypted

Implement data movement for an Azure Cosmos DB solution  

  • Choose a data movement strategy
  • Move data by using client SDK bulk operations
  • Move data by using Azure Data Factory and Azure Synapse pipelines
  • Move data by using a Kafka connector
  • Move data by using Azure Stream Analytics
  • Move data by using the Azure Cosmos DB Spark Connector

Implement a DevOps process for an Azure Cosmos DB solution  

  • Choose when to use declarative versus imperative operations
  • Provision and manage Azure Cosmos DB resources by using Azure Resource Manager templates (ARM templates)
  • Migrate between standard and autoscale throughput by using PowerShell or Azure CLI
  • Initiate a regional failover by using PowerShell or Azure CLI
  • Maintain index policies in production by using ARM templates

About The Trainer

A Certified Microsoft Azure Trainer

Course Fee

$1595

For Assistance

For the latest batch dates, fees, locations, technical queries, and general inquiries, contact Mr. Bhavesh Goswami at +91 7618705318 or email bhavesh@cloudthat.com
