Author: Nithin Mohan TK

New Microsoft Azure Certifications

September 16, 2018 Azure, Azure SDK, Azure Tools, Certification, Emerging Technologies, MCP, Microsoft, Microsoft Learning, Windows Azure Development

Microsoft has recently announced new certification exam tracks for Azure administrators, developers, and architects. Here is the lineup that should help you move your career forward with the right certifications.

The three new Microsoft Azure Certifications are:

  • Microsoft Certified Azure Developer
  • Microsoft Certified Azure Administrator
  • Microsoft Certified Azure Architect

These certifications essentially split the previous MCSA/MCSE: Cloud Platform and Infrastructure track and introduce new exams for each individual certification track.

So far only limited information is available about the exam numbers for each individual track; Microsoft has recently made beta exams available for the Microsoft Certified Azure Administrator track.

These exams are still in beta and will reach general availability in the coming months. I will keep you posted about newer exams for the other tracks as we get to know more.

References: https://www.microsoft.com/en-us/learning/exam-list.aspx 

Enterprise Architecture

August 12, 2018 Architectures, Software/System Design, TOGAF

What is an Enterprise Architecture?

In this modern world there is a lot of confusion about enterprise architecture, so I would like to write a short scribble about enterprise architecture, or EA for short.

I will start with a definition from Architecture and Governance Magazine, Issue 9-4, November 2013:

Enterprise architecture (EA) is “a well-defined practice for conducting enterprise analysis, design, planning, and implementation, using a comprehensive approach at all times, for the successful development and execution of strategy.

Enterprise architecture applies architecture principles and practices to guide organizations through the business, information, process, and technology changes necessary to execute their strategies.

These practices utilize the various aspects of an enterprise to identify, motivate, and achieve these changes.”

That sums up EA as the practice of translating business goals and strategy into practical changes to business processes, information systems (data and applications), and technologies within an “enterprise”. EA also describes a desired future state of the enterprise and facilitates the change towards it.

How do you achieve that? The answer is short: through architecture governance, applied via a chosen architecture framework. That makes EA an essential practice, at any organizational level, for keeping all systems functioning as desired.

Goals of enterprise architecture are:

  1. Effectiveness
  2. Efficiency
  3. Agility
  4. Durability

Subsets/layers of enterprise architecture, or architecture domains:

There are four kinds of “architecture” that are commonly accepted as subsets of a well-defined enterprise architecture:

  1. Business Architecture: defines the business strategy, governance, organization, and key business processes.
  2. Data Architecture: describes the structure of the organization’s logical and physical data assets and data management resources.
  3. Application Architecture: provides a blueprint for the individual applications to be deployed, their interactions, and their relationships to the core business processes.
  4. Technology Architecture: describes the software and hardware infrastructure needed to support the deployment of business, data, and application services.

The NIST Enterprise Architecture Model, initiated in 1989, is one of the earliest frameworks for enterprise architecture. (Courtesy: Wikipedia.)

What is an Enterprise Architecture Framework?

An enterprise architecture framework (EA framework) defines how to create and use an enterprise architecture.

As per Wikipedia, there are countless EA frameworks; some of them are categorized below. (Courtesy: Wikipedia.)

Consortia-developed frameworks:
  • ARCON – A Reference Architecture for Collaborative Networks – not focused on a single enterprise but rather on networks of enterprises.
  • Generalised Enterprise Reference Architecture and Methodology (GERAM)
  • RM-ODP – the Reference Model of Open Distributed Processing (ITU-T Rec. X.901-X.904 | ISO/IEC 10746) defines an enterprise architecture framework for structuring the specifications of open distributed systems.
  • IDEAS Group – a four-nation effort to develop a common ontology for architecture interoperability
  • ISO 19439 Framework for enterprise modelling
  • TOGAF – The Open Group Architecture Framework – a widely used framework including the Architecture Development Method (ADM) and standards for describing various types of architecture.

Defence industry frameworks:

  • AGATE – the French DGA Architecture Framework
  • DNDAF – the DND/CF Architecture Framework (CAN)
  • DoDAF – the US Department of Defense Architecture Framework
  • MODAF – the UK Ministry of Defence Architecture Framework
  • NAF – the NATO Architecture Framework

Government frameworks:

  • European Space Agency Architectural Framework (ESAAF) – a framework for European space-based Systems of Systems
  • Government Enterprise Architecture (GEA) – a common framework legislated for use by departments of the Queensland Government
  • FDIC Enterprise Architecture Framework
  • Federal Enterprise Architecture Framework (FEAF) – a framework produced in 1999 by the US Federal CIO Council for use within the US Government (not to be confused with the 2002 Federal Enterprise Architecture (FEA) guidance on categorizing and grouping IT investments, issued by the US Federal Office of Management and Budget)
  • Nederlandse Overheid Referentie Architectuur (NORA) – a reference framework from the Dutch government (e-overheid)
  • NIST Enterprise Architecture Model
  • Treasury Enterprise Architecture Framework (TEAF) – a framework for treasury, published by the US Department of the Treasury in July 2000.

Open-source frameworks:

Enterprise architecture frameworks that are released as open source:

  • MEGAF is an infrastructure for realizing architecture frameworks that conform to the definition of architecture framework provided in ISO/IEC/IEEE 42010.
  • Praxeme, an open enterprise methodology, contains an enterprise architecture framework called the Enterprise System Topology (EST)
  • TRAK – a general systems-oriented framework based on MODAF 1.2 and released under GPL/GFDL.
  • SABSA is an open framework and methodology for Enterprise Security Architecture and Service Management, that is risk based and focuses on integrating security into business and IT management.

Proprietary frameworks:

  • ASSIMPLER Framework – an architecture framework, based on the work of Mandar Vanarse at Wipro in 2002
  • Avancier Methods (AM) – processes and documentation advice for enterprise and solution architects, supported by training and certification.
  • BRM (Build-Run-Manage) Framework – an architecture framework created by Sanjeev “Sunny” Mishra during his early days at IBM in 2000.
  • Capgemini Integrated Architecture Framework (IAF) – from Capgemini company in 1993
  • Dragon1 – an open visual enterprise architecture method, recently recognized by The Open Group as an architecture framework
  • DYA framework developed by Sogeti since 2004.
  • Dynamic Enterprise – an enterprise architecture concept based on Web 2.0 technology
  • Extended Enterprise Architecture Framework – from Institute For Enterprise Architecture Developments in 2003
  • EACOE Framework  – an Enterprise Architecture framework, as an elaboration of the work of John Zachman
  • IBM Information FrameWork (IFW) – conceived by Roger Evernden in 1996
  • Pragmatic Enterprise Architecture Framework (PEAF) – part of Pragmatic Family of Frameworks developed by Kevin Lee Smith, Pragmatic EA, from 2008
  • Purdue Enterprise Reference Architecture developed by Theodore J. Williams at the Purdue University early 1990s.
  • SAP Enterprise Architecture Framework
  • Service-oriented modeling framework (SOMF), based on the work of Michael Bell
  • Solution Architecting Mechanism (SAM) – A coherent architecture framework consisting of a set of integral modules.
  • Zachman Framework – an architecture framework, based on the work of John Zachman at IBM in the 1980s

I hope that covers the initial concepts of enterprise architecture. In later posts I will write more about an interesting and widely used enterprise architecture framework called TOGAF – The Open Group Architecture Framework.

In the meantime, read my previous article: TOGAF 9.1 Certified

Introduction to NDepend : Static Code Analysis Tool

June 16, 2018 .NET, .NET Core, .NET Framework, ASP.NET, Best Practices, C#.NET, Code Analysis, Code Quality, Dynamic Analysis, Emerging Technologies, Help Articles, Microsoft, Static Analysis, Tech-Trends, Tools, Visual Studio 2017, VisualStudio, Windows

As a developer, you always have to take the pain of adapting to the best practices and coding guidelines required by organizational or industry standards. An easy way to ensure your coding style follows a certain standard is to analyze your code manually or to use a static code analyzer like FxCop or StyleCop. In my earlier days I was a fan of FxCop, as it was free and provided all the general guidelines I needed to improve my solution.

In this modern world of programming everything needs to be automated: automating repetitive tasks saves time and money and improves efficiency. This is where static code analyzers become effective.

What is Static Code Analysis?

Static program analysis is the analysis of computer software performed without actually executing the program. It usually operates on some version of the program’s source code, or in other cases on some form of object code or intermediate compiled code.

The sophistication of static program analysis depends on how deeply it analyzes the code, from the behavior of individual statements and declarations up to the entire source code.

PS: Analysis performed on executing programs is known as dynamic analysis.
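For illustration, here is a small, contrived C# method of my own (not tied to any particular tool) containing the kind of defect a static analyzer can flag without ever running the code:

   // A static analyzer can report the possible null dereference below
   // purely by inspecting the source - no execution required.
   public static int FirstCharCode(string text)
   {
       // If the caller passes null, text.Length throws a NullReferenceException;
       // an analyzer detects this path statically, while a dynamic test would
       // only catch it if that exact input were exercised.
       return text.Length > 0 ? text[0] : -1;
   }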

In this article I will give you an overview of one such premier static code analysis tool that you can use in your daily development routine and integrate into your CI pipeline for DevOps efficiency.

NDepend:

NDepend is a static analysis tool for .NET managed code. NDepend supports a large number of code metrics and allows you to visualize dependencies using directed graphs and a dependency matrix. It also performs comparisons of code base snapshots and validation of architectural and quality rules.

The important capabilities of NDepend are:

  • Dependency visualization through a dependency matrix and graphs.
  • Analysis and generation of software quality metrics – as per the documentation it supports 82 quality metrics.
  • Declarative rule support through LINQ queries – called CQLinq – with a large number of predefined CQLinq rules (see the sketch after this list).
  • Integration support for CruiseControl.NET, SonarQube, and TeamCity. Code rules can be configured to be checked automatically in Visual Studio or during continuous integration (CI).
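To give a feel for CQLinq, here is a minimal rule sketch, modeled on NDepend’s predefined rules, that warns when a method grows beyond 30 lines of code (the 30-line threshold is my own choice for illustration):

   // <Name>Avoid too big methods (sketch)</Name>
   warnif count > 0
   from m in Application.Methods
   where m.NbLinesOfCode > 30
   orderby m.NbLinesOfCode descending
   select new { m, m.NbLinesOfCode }

Rules like this can run live inside Visual Studio and can also be evaluated during the CI build.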

License: NDepend is a commercial tool with licensing options as below:

  1. Developer seats – approx. $477 per seat.
  2. Build machine seats – approx. $955 per seat.

** You can get a volume discount if you procure licenses in bulk.

Installation: 

Once you have obtained a license, you will be able to download NDepend_2018.1.1.9041.zip, the latest version available as I write this article. Extract the zip file into a local folder and you will see the different packages/executables within the package.

1.) NDepend.Console – a command-line program that executes NDepend analysis. You would mostly use this component on a CI build server; in its simplest form it takes the path of your NDepend project (.ndproj) file as its first argument.

2.) NDepend.PowerTools – helps you write your own static analyzer based on NDepend.API, or tweak the existing open-source Power Tools.

3.) NDepend.VisualStudioExtension.Installer – installs the NDepend extension into Visual Studio.

4.) VisualNDepend – a standalone visual environment for managing your NDepend tasks.

The visual tool gives you different options to choose from:

  • Analyze a Visual Studio solution or project.
  • Analyze .NET assemblies in a folder.

For demo purposes, our analysis target is one of the starter projects from GitHub – ContosoUniversity by @alimon808.

Demo: Summary Report

Demo: Application Metrics

Demo: Dependency Dashboard

Demo: Interactive Graph

Demo: Code Matrix View

Demo: Quality Gates Summary

Demo: Rules Summary

Conclusion:

NDepend is one of the best enterprise-grade commercial static analyzers I have seen so far. Visual Studio Code Analysis, FxCop, and StyleCop analyzer tools are available, but they do not provide the extensive analysis reports that NDepend provides. Being a commercial tool, it gives customers value for money based on what they need. In a day-to-day developer or DevOps lifecycle, you can integrate NDepend into your build process, which can be as simple as executing the NDepend console and reviewing the output. With NDepend’s API it is easy to develop your own custom analysis tools based on CQLinq and NDepend.PowerTools (which is open source). You can find all the detailed help in the NDepend documentation.

PowerShell: Check a parameter/variable value is null

June 8, 2018 PowerShell, Scripting

While writing PowerShell modules with lots of parameters, you may want to verify that those parameters are not null in order to validate some business cases. In a normal PowerShell inline scripting context, comparing against $null works:

if ($variablename -eq $null)
{
    Write-Host "Variable is null. Please supply a value for variablename."
}

Note: putting $null on the left-hand side ($null -eq $variablename) is generally the safer pattern, since -eq behaves differently when the variable happens to hold an array.

RECOMMENDED APPROACH:
A more efficient way of checking this inside a module is:

if (!$variablename)
{
    Write-Host "Variable is null. Please supply a value for variablename."
}

Keep in mind that this check also treats 0, an empty string, and an empty collection as “null”, so use it only when those values are equally invalid.

If you want to verify that $variablename has any value except $null:

if ($variablename)
{
    Write-Host "variablename is not null. Do something here."
}

Node.js 9.x.x and npm 6.x.x – “npm audit” to identify and fix security vulnerabilities in dependencies

June 3, 2018 JavaScript, Javascript Development, Modern Web Development, Node.js, NPM, OpenSource, Package Manager, Tech Newz, TypeScript, Web

For a while I have been reading about the major changes introduced in Node.js 9.x.x / npm 6.x.x, and I faced them myself when my Node.js application went for a toss after I upgraded to Node.js 9.x.x, as I always keep Node.js up to date in my development environment.

I use NVM (Node Version Manager) to switch between different versions of Node.js, and I love the flexibility NVM provides. So I was able to quickly switch back to the 8.x.x version when I figured out this change.

But the npm package downgrade using “npm install -g npm@5.x.x” did not work due to old traces of 6.x.x; I had to clean up my npm cache and run npm install again.

Introduction – The “npm audit” command:

With 6.0.0, the npm team has recently introduced many improvements, such as:

a.) Protection against insecure code in the workflow during npm install. When a user downloads code from the npm registry, npm reviews the request against the Node Security Platform database and returns a warning if the code contains a vulnerability.

b.) Package signing for publishers. The npm-signature field will allow users of npm packages to verify the integrity of a package regardless of the tools they use to retrieve it or the registry from which they download it.

c.) Security auditing capability (which I am covering in this article).

The audit capability provides the ability to perform a security audit of your project and its dependencies. Simply put, it provides a moment-in-time security review of your project’s dependency tree.

  • It will scan your project for any vulnerabilities.
  • You can choose to automatically install compatible updates to vulnerable dependencies.
  • Audit reports contain information about security vulnerabilities in your dependencies.
  • The report also contains the steps needed to fix these vulnerabilities, for example by running npm install <package>@new-version.
  • It works well with private/enterprise registries such as Artifactory.
  • It allows the developer to recursively analyze trees of dependent code to identify specifically what is insecure.

The audit command submits a description of the dependencies configured in your project to your default registry and asks for a report of known vulnerabilities.

Quick Insight on the new commands:

  • npm audit – scan your project for vulnerabilities and just show the details, without fixing anything.
  • npm audit --json – provide the report in JSON format.
  • npm audit fix – scan and fix all vulnerabilities.
  • npm audit fix --only=prod – skip updating devDependencies.
  • npm audit fix --force – install semver-major updates to all top-level dependencies.
  • npm audit fix --dry-run --json – do a dry run of the fixes and provide a report.

NB: npm audit fix runs a full npm install under the hood, so all configs that apply to the installer also apply to npm audit fix.

CosmosDB – Programmatically Connect to a preferred location using the SQL API

May 29, 2018 .NET, Azure, CosmosDB, Microsoft, VisualStudio, Windows, Windows Azure Development

Cosmos DB is a multi-region, scalable, globally distributed database solution that is part of the Microsoft Azure platform. With a button click, Azure Cosmos DB enables you to elastically and independently scale throughput and storage across any number of Azure’s geographic regions. It offers throughput, latency, availability, and consistency guarantees with comprehensive service level agreements (SLAs) that no other database service can offer. [REF]

What is multi-region scalability or global distribution?

What it means is that once you select this option, the underlying platform will ensure that your main database is replicated across the other global regions you have defined.

So when a customer/application requests the data from a certain geo location:

  1. Cosmos DB will serve the data from the nearest available regional copy, providing low latency in accessing the database. To achieve this, it is recommended to deploy both the application and Azure Cosmos DB in corresponding regions.
  2. In case a nearby region is not defined, it will serve from the nearest available or main copy. This could be East US or West US, depending on your deployment decisions.
  3. As a BCDR (business continuity and disaster recovery) plan, in case the main copy is not available, it will fail over to serve the requests from a backup region.

Benefits?

  • Ensured AVAILABILITY @ 99.99% – Azure Cosmos DB offers low latency reads and writes at the 99th percentile worldwide.
  • Faster READS: It ensures that all reads are served from the closest (local) region.  To serve a read request, the quorum local to the region in which the read is issued is used.
  • Reliable WRITES: The same applies to writes. A write is acknowledged only after a majority of replicas have durably committed the write locally but without being gated on remote replicas to acknowledge the writes.

PS: The replication protocol of Azure Cosmos DB operates under the assumption that the read and write quorums are always local to the region where the request has been issued.

How to turn on multi-region replication in Cosmos DB?

In your Cosmos DB instance settings, select the “Replicate data globally” page, then add or remove regions by clicking them on the map.

Azure Cosmos DB enables you to configure each region associated with the database as a “read”, “write”, or “read/write” region.

Then configure the manual/automatic failover options as well; I will cover these in later articles.

All that said, as a Cosmos DB customer or user you are in the good hands of the Azure platform.

NB: For the purpose of this article, I have configured my instance to run in different regions, with the write region as East US and the read regions as West Europe, North Europe, and West US.

Programmatically connect to a preferred location using the SQL API:

Now, coming to the context of this blog: as an application developer, you sometimes want to programmatically control access to these regions while using the Cosmos DB .NET SQL API.

In the Cosmos DB .NET SDK version 1.8 and later, the ConnectionPolicy parameter of the DocumentClient constructor has a property called Microsoft.Azure.Documents.ConnectionPolicy.PreferredLocations.

  • All reads are sent to the first available region in the PreferredLocations list. If the request fails, the client fails down the list to the next region, and so on.
  • The SDK automatically sends all writes to the current write region.
  • The SDK only attempts to read from the regions specified in PreferredLocations.
  • For example: if you have four read regions defined in your Cosmos DB instance but only two regions listed in PreferredLocations in the ConnectionPolicy, requests will never be served from the other two regions by the SDK.

NB: The client application can verify the current write endpoint and read endpoint chosen by the SDK by checking two properties, WriteEndpoint and ReadEndpoint (SDK version 1.8+).

The following code snippet makes it easy to implement:

   // Set the read-region selection preference on the connection policy.
   ConnectionPolicy connectionPolicy = new ConnectionPolicy();
   connectionPolicy.PreferredLocations.Add(LocationNames.EastUS);      // application's first preference
   connectionPolicy.PreferredLocations.Add(LocationNames.WestEurope);  // application's second preference
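
For a fuller picture, here is a minimal, self-contained sketch (the account URI and key below are hypothetical placeholders) that builds the client with the preferred locations and then prints the endpoints the SDK actually selected, using the WriteEndpoint and ReadEndpoint properties mentioned above:

   using System;
   using Microsoft.Azure.Documents;
   using Microsoft.Azure.Documents.Client;

   class PreferredLocationDemo
   {
       static void Main()
       {
           // Hypothetical placeholders - replace with your account's URI and key.
           Uri accountUri = new Uri("https://myaccount.documents.azure.com:443/");
           string authKey = "<your-auth-key>";

           ConnectionPolicy connectionPolicy = new ConnectionPolicy();
           connectionPolicy.PreferredLocations.Add(LocationNames.EastUS);
           connectionPolicy.PreferredLocations.Add(LocationNames.WestEurope);

           using (DocumentClient client = new DocumentClient(accountUri, authKey, connectionPolicy))
           {
               // Verify which endpoints the SDK chose (SDK version 1.8+).
               Console.WriteLine("Write endpoint: " + client.WriteEndpoint);
               Console.WriteLine("Read endpoint:  " + client.ReadEndpoint);
           }
       }
   }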

Full Source Code: https://github.com/AzureContrib/CosmosDB-DotNet-Quickstart-Preferred-Location 
