CI/CD/CT at Enterprise Scale – Add label to Pull Requests

As a best practice, when we want to merge a topic or feature branch back into the main branch, we create a pull request. Developers sometimes need to communicate extra information to reviewers or draw their attention to specific details. In VSTS, labels provide a new way of communicating on pull requests.

You can tag your pull requests with labels like “Hotfix”, “Dependency” or even “DON’T MERGE” to draw reviewers’ attention to important details. Nowadays, many teams communicate things like this during stand-ups or through tools like Slack or Microsoft Teams. However, when it comes to enterprise-level applications and big teams, these traditional ways are not really practical because communication can easily get lost between team members.

CI/CD/CT at Enterprise Scale – Add Continuous Testing to Pull Requests

As a good practice, we always branch our feature branches off the main branch (e.g. master), and when we are done with development, we send a pull request to merge the changes back into the main branch.

Because the main branch is always evolving with changes from other branches, merging a feature branch back into it can break the code. While we expect team members to update their feature branches with the latest changes from the main branch before sending a pull request to reduce this risk, that doesn’t always work well in practice.

This is why introducing continuous testing into the pull request process removes the risk of broken code. In VSTS, this can be implemented simply by adding a Build Validation as part of the Branch Policies.

Navigate to the list of branches, find your main branch and then select “Branch Policies”.

On the policies screen, click on Build Validation and add your build pipeline to the branch policies. Continuous testing can be one of the steps in your build pipeline. You can make this policy Required or Optional.

You can also enable Test Impact Analysis (TIA) to reduce testing and, consequently, build time.

You can find more information about TIA here.

CI/CD/CT at Enterprise Scale – Enable Test Impact Analysis

As part of your CI/CD/CT pipeline, you want to integrate, deploy and test your application several times a day. When it comes to an enterprise-level application, we usually have a huge set of tests, which makes it very hard and time-consuming (if not impossible) to run all of them every time as part of continuous testing.

Test Impact Analysis (TIA) is a technique for determining which tests are impacted by a given set of changes, so you don’t need to run all of the tests every time you want to build and deploy a new version.

TIA is just a click away in VSTS – you can easily enable TIA as part of your CI/CD/CT pipeline to dramatically reduce the time needed to run the tests.

You just need to enable “Run only impacted tests” in the test step of your build (version 2.* and above of the Visual Studio Test task).

Protect your Azure Backups with Multi-Factor authentication

You may ask why you would need to protect your Azure Backups – or, more precisely, your Recovery Services Vault – with MFA. When you need to restore a backup, the last thing you want to discover on a rainy day is that a hacker has deleted your backups! So protect it or lose it!

To enable MFA on your Recovery Services Vault, select your vault, select Properties, click the Update link under Security Settings, set Multi-Factor Authentication to Yes and save your changes.

Enable MFA for Recovery Services Vault

Seeding data in EF Core 2.1

Microsoft announced Entity Framework Core 2.1 RC at Build 2018. Data seeding is probably one of the most useful features of Entity Framework, and it got a nice upgrade in this version. Unlike Entity Framework 6.0, seed data is now associated with an entity type as part of the model configuration, which means Entity Framework can track added, updated or removed seed data from one change to the next and generate the corresponding migration script.

Seeding data in Entity Framework Core is now as easy as:

modelBuilder.Entity<Book>().HasData(new Book {BookId = 1, Title = "Don't read this book"});
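The HasData call above typically lives in your DbContext’s OnModelCreating override. Below is a minimal sketch; the Book entity and BookContext names are illustrative assumptions based on the one-liner above, not code from the original post:

```csharp
using Microsoft.EntityFrameworkCore;

public class Book
{
    public int BookId { get; set; }
    public string Title { get; set; }
}

public class BookContext : DbContext
{
    public DbSet<Book> Books { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Seed data is part of the model; EF Core diffs it between
        // migrations and generates the corresponding INSERT/UPDATE/DELETE.
        modelBuilder.Entity<Book>().HasData(
            new Book { BookId = 1, Title = "Don't read this book" });
    }
}
```

Note that seed entities must have their key values (here, BookId) specified explicitly, since EF Core uses the key to detect changes between migrations.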

Stop Auto Reopen of Programs after Restart in Windows 10

Windows 10 recently introduced a new feature in the Fall Creators Update which reopens (or kind of restores) all apps from the last shutdown. If you want to disable this feature, simply navigate to Settings, search for Sign-in Options and turn off “Use my sign-in info to automatically finish setting up my device after an update or restart”.

 

Windows 10 Signin options
Turn off the “Use my sign-in info to automatically finish setting up my device after an update or restart” option to stop apps from automatically reopening during startup.

Image Processing with Microsoft Cognitive Services API and Azure DocumentDB

Cognitive-Services_Computer-Vision-API_01

At Build 2016, Microsoft rebranded Project Oxford and introduced it as Microsoft Cognitive Services. In total, there are now 21 APIs under 5 categories available in Cognitive Services.

Computer Vision API is one of the APIs in the Vision category and brings image processing into your application. This API extracts and returns rich information about the visual content found in an image.

In this article, we will learn how to utilize the Computer Vision API and store the serialized result in Azure DocumentDB. Schema-free databases suit this scenario perfectly because we can easily dump the data and store it as a document, and we can change the data structure and what we expect from the API at any time.

Prerequisites:

Cognitive API subscription
Azure subscription and Azure DocumentDB Account
Visual Studio 2015 Community

Solution Structure:

To keep the example simple, our solution is composed of two projects. One, called Infrastructure, contains the data access logic and services; the other is an ASP.NET Core project called Web, on top of which we will build our API. The solution is simplified on purpose; in a real-world application you are expected to separate data access logic and domain services into different layers (hopefully!).

Implementation:

The first step is to store URLs and subscription keys in a config file (rather than hard-coding values in the code). For this example, we are going to create two properties called DocumentDB and CognitiveService in the API project’s appsettings.json.

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    }
  },
  "DocumentDB": {
    "Database": "[PUT_YOUR_DATABASE_NAME_HERE(Eg. CognitiveDB)]",
    "Collection": "[PUT_YOUR_COLLECTION_NAME_HERE(Eg. Images)]",
    "Endpoint": "[PUT_YOUR_DOCUMENTDB_ACCOUNT_URI_HERE(Eg. https://xxx.documents.azure.com:443/)]",
    "AuthKey": "[PUT_YOUR_DOCUMENTDB_KEY_HERE]"
  },
  "CognitiveService": {
    "ComputerVision": {
      "Url": "https://api.projectoxford.ai/vision/v1.0/analyze?",
      "SubscriptionKey": "[PUT_YOUR_SUBSCRIPTION_KEY_HERE]",
      "ContentType": "application/json"
    }
  }
}

Note that the ASP.NET Core configuration system was still in flux at the time of writing, so the name or format of this file may change in later releases.

For the DocumentDB property, we need the DocumentDB database name, collection name, endpoint URL (the DocumentDB account endpoint) and key. You can find all this information in the DocumentDB account blade.

Cognitive-Services_Computer-Vision-API_02

For the CognitiveService property, we need the service URL (as you can see, it still points to Project Oxford at the moment, but you can always get the latest address from the official documentation), the Computer Vision API subscription key and the content type. For this example we specify JSON as the content type.

In this example, we are going to implement a simplified repository to work with the database. The main purpose is to show that the data access layer should be segregated behind interfaces, and how to implement the repository pattern on top of DocumentDB. Therefore, we need to define the repository contract first:

public interface IImageRepository
{
    Task CreateAsync(Image image);
}

As you can see, the contract is called IImageRepository. In the next step, we implement the concrete repository class which implements IImageRepository:

public class ImageRepository : IImageRepository
{
    private string endpoint;
    private string authKey;
    private string databaseId;
    private string collectionId;

    private DocumentClient client;
    private Database database;
    private DocumentCollection collection;

    public ImageRepository(string endpoint, string authKey, string databaseId, string collectionId)
    {
        this.endpoint = endpoint;
        this.authKey = authKey;
        this.databaseId = databaseId;
        this.collectionId = collectionId;
    }

    public DocumentClient Client
    {
        get
        {
            if (client == null)
            {
                Uri endpointUri = new Uri(this.endpoint);
                client = new DocumentClient(endpointUri, this.authKey);
            }

            return client;
        }
    }

    public Database Database
    {
        get
        {
            if (database == null)
            {
                database = ReadOrCreateDatabase();
            }

            return database;
        }
    }

    public DocumentCollection Collection
    {
        get
        {
            if (collection == null)
            {
                collection = ReadOrCreateCollection(Database.SelfLink);
            }

            return collection;
        }
    }

    private Database ReadOrCreateDatabase()
    {
        var database = this.Client.CreateDatabaseQuery()
                        .Where(d => d.Id == this.databaseId)
                        .AsEnumerable()
                        .FirstOrDefault();

        if (database == null)
        {
            database = this.Client.CreateDatabaseAsync(new Database { Id = this.databaseId }).Result;
        }

        return database;
    }

    private DocumentCollection ReadOrCreateCollection(string databaseLink)
    {
        var collection = this.Client.CreateDocumentCollectionQuery(databaseLink)
                          .Where(c => c.Id == this.collectionId)
                          .AsEnumerable()
                          .FirstOrDefault();

        if (collection == null)
        {
            collection = this.Client.CreateDocumentCollectionAsync(databaseLink, new DocumentCollection { Id = this.collectionId }).Result;
        }

        return collection;
    }

    public async Task CreateAsync(Image image)
    {
        if (string.IsNullOrEmpty(image.Id))
        {
            image.Id = GenerateImageId();
        }

        await this.Client.CreateDocumentAsync(this.Collection.SelfLink, image);
    }

    private string GenerateImageId()
    {
        return Guid.NewGuid().ToString();
    }
}

In a real-world application you would typically have a generic base repository that all repositories implement. The Client property lazily creates the DocumentDB client, which is essential for working with DocumentDB. The Database and Collection properties call ReadOrCreateDatabase and ReadOrCreateCollection to read the database and collection, or create them if they do not exist. GenerateImageId generates a unique string ID for the document; you will need a better approach to generating unique document IDs in a real-world application.

As you can see in the repository definition and implementation, we are going to store an object of type Image in the database. Image is basically a POCO which represents the document structure and is populated from the API response. Below is the definition of the Image, Tag and Metadata classes:

public class Image
{
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    [JsonProperty(PropertyName = "tags")]
    public List<Tag> Tags { get; set; }

    [JsonProperty(PropertyName = "metadata")]
    public Metadata Metadata { get; set; }
}

public class Tag
{
    [JsonProperty(PropertyName = "name")]
    public string Name { get; set; }

    [JsonProperty(PropertyName = "confidence")]
    public decimal Confidence { get; set; }

    [JsonProperty(PropertyName = "hint")]
    public string Hint { get; set; }
}

public class Metadata
{
    [JsonProperty(PropertyName = "width")]
    public int Width { get; set; }

    [JsonProperty(PropertyName = "height")]
    public int Height { get; set; }

    [JsonProperty(PropertyName = "format")]
    public string Format { get; set; }
}

As you can see, all properties are annotated with Newtonsoft’s JsonProperty attribute to specify how the data is serialized. Id is a special property in DocumentDB and, as mentioned before, is expected to be a unique string.

Next, we need to define our service contract for processing images. Below is the service definition, which has only one method to process an image.

public interface ICognitiveService
{
    Task<Image> ProcessImage(string imageUrl);
}

Now it’s time to implement the ProcessImage method in the concrete service class. Again, it is simplified to make it easier to understand.

public class CognitiveService : ICognitiveService
{
    private string uri;
    private string subscriptionKey;
    private string contentType;

    public CognitiveService(string url, string subscriptionKey, string contentType)
    {
        this.uri = $"{url}visualFeatures=Tags";
        this.subscriptionKey = subscriptionKey;
        this.contentType = contentType;
    }

    public async Task<Image> ProcessImage(string imageUrl)
    {
        // Instantiate a HTTP Client
        var client = new HttpClient();

        // Pass subscription key thru the HTTP Request Header
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

        // Format Request body
        byte[] byteData = Encoding.UTF8.GetBytes($"{{\"url\": \"{imageUrl}\"}}");

        using (var content = new ByteArrayContent(byteData))
        {
            // Specify Request body Content-Type
            content.Headers.ContentType = new MediaTypeHeaderValue(contentType);

            // Send Post Request
            HttpResponseMessage response = await client.PostAsync(uri, content);

            // Read Response body into the image model
            return await response.Content.ReadAsAsync<Image>();
        }

    }
}

As you can see in the constructor, we only specify Tags to be returned from the API. You can extend this if you want other sets of information about the image (e.g. categories instead of tags); you can also pass multiple comma-separated values to the API. The Computer Vision API subscription key is passed in the request header, the URL of the image is passed through the request body, and finally the API response is deserialized into an Image object.

Now, in the API project, we need to register our repository and service so they can be injected. In this example, we will use the ASP.NET Core built-in DI; therefore, we need to register our repository and service in the ConfigureServices method of Startup.cs.

services.AddSingleton<IImageRepository>(s =>
{
    string databaseId = Configuration["DocumentDB:Database"];
    string collectionId = Configuration["DocumentDB:Collection"];
    string endpoint = Configuration["DocumentDB:Endpoint"];
    string authKey = Configuration["DocumentDB:AuthKey"];

    return new ImageRepository(endpoint, authKey, databaseId, collectionId);
});

services.AddScoped<ICognitiveService>(s =>
{
    string url = Configuration["CognitiveService:ComputerVision:Url"];
    string subscriptionKey = Configuration["CognitiveService:ComputerVision:SubscriptionKey"];
    string contentType = Configuration["CognitiveService:ComputerVision:ContentType"];

    return new CognitiveService(url, subscriptionKey, contentType);
});

As the last step, we only need to create a method inside an API controller to orchestrate the workflow. For this example, we will call it ProcessImage. In a real-world application you would have better validation and exception management, which is not implemented here for the sake of simplicity.

[HttpPost]
public async Task ProcessImage([FromBody]ProcessImagePayload payload)
{
    if(ModelState.IsValid)
    {
        var image = await _cognitiveService.ProcessImage(payload.Url);
        if (image != null)
        {
            await _imageRepository.CreateAsync(image);
        }
        else
        {
            Response.StatusCode = (int)HttpStatusCode.BadRequest;
        }
    }
    else
    {
        Response.StatusCode = (int)HttpStatusCode.BadRequest;
    }
}
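The ProcessImagePayload model bound from the request body is not shown in the original post; a minimal sketch, assuming it only carries the image URL, could look like this:

```csharp
using System.ComponentModel.DataAnnotations;

public class ProcessImagePayload
{
    // URL of the image to analyze. [Required] makes ModelState.IsValid
    // evaluate to false when the value is missing from the request body.
    [Required]
    public string Url { get; set; }
}
```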

Both the repository and the cognitive service are injected into the controller through its constructor.
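The constructor injection could look like the following sketch; the controller name ImagesController and its route are assumptions, as the original post does not show them:

```csharp
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class ImagesController : Controller
{
    private readonly IImageRepository _imageRepository;
    private readonly ICognitiveService _cognitiveService;

    // Both dependencies are resolved by the built-in DI container,
    // based on the registrations made in ConfigureServices.
    public ImagesController(IImageRepository imageRepository,
                            ICognitiveService cognitiveService)
    {
        _imageRepository = imageRepository;
        _cognitiveService = cognitiveService;
    }
}
```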

Below illustrates the solution structure:

Cognitive-Services_Computer-Vision-API_03

Now if you call the API and send an image URL through API payload, you will get the extracted information from the image in the form of a document in DocumentDB.

Cognitive-Services_Computer-Vision-API_04

You can find the source code for this example on GitHub.