Building webpages with AI

A look into Sketch2Code, an open source project by Microsoft's AI Lab

What seems to be the problem with building HTML pages?

I think it has to do with the amount of work required just to build the structural components. With machine learning, we can easily identify components such as lines, text, and boxes, and generate code for them, though experienced web designers can still do that by hand fairly efficiently.

The real bottleneck is brainstorming designs. When you have a lot of ideas in mind, you can sketch them on paper or a whiteboard in a jiffy, but the same is not true for code: writing it takes a hefty amount of effort, and it is not something designers want to do because it slows down the design process.

I used Microsoft FrontPage for creating webpages in my school days, and it was quite a relief for a newbie like me. I think Sketch2Code is a step up from that, in the right direction.


Microsoft AI’s Sketch2Code is a solution that uses AI to transform a picture of a hand-drawn user interface design into valid HTML markup. It is open source, and you can play around with the tool, as well as the code behind it, as much as you like. You can find the code, process flow, architecture, and all the details on GitHub.

Process GIF

The web application is free and available to everyone; you can try out Sketch2Code right away on this website.

Home Page - AI.lab

Official Blog Post - Azure

Watch this short video to learn more about the project - Microsoft Developers.


1. Detect Design Patterns

A Custom Vision model, trained to perform object recognition on hand-drawn HTML patterns, is used to detect meaningful design elements within an image.

2. Understand handwritten text

Each detected element is passed through a Text Recognition Service to extract handwritten content.

3. Understand Structure

The detected objects and their positions within the image are fed into an algorithm that generates the underlying structure.

4. Build HTML

Valid HTML is generated according to the detected layout, containing the detected design elements.
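The four steps above can be sketched in code. This is an illustrative Python sketch, not the actual Sketch2Code implementation: the detection and text-recognition results (steps 1 and 2) are stubbed out as plain data, and the structure step simply groups elements into rows by vertical position before emitting HTML.

```python
# Illustrative sketch of the Sketch2Code pipeline (not the real implementation).
# Steps 1-2 (object detection and handwriting OCR) are assumed to have already
# produced a list of elements: type, bounding box (left, top, right, bottom),
# and any recognized text.

detected = [  # hypothetical output of the detection + text recognition steps
    {"type": "label",  "box": (10, 10, 200, 40),   "text": "Welcome"},
    {"type": "image",  "box": (10, 60, 150, 160),  "text": ""},
    {"type": "button", "box": (170, 60, 260, 90),  "text": "Sign up"},
    {"type": "button", "box": (170, 110, 260, 140), "text": "Log in"},
]

def group_into_rows(elements, tolerance=25):
    """Step 3: infer structure by clustering elements whose top edges are close."""
    rows = []
    for el in sorted(elements, key=lambda e: e["box"][1]):
        if rows and abs(rows[-1][0]["box"][1] - el["box"][1]) <= tolerance:
            rows[-1].append(el)
        else:
            rows.append([el])
    # within each row, order elements left to right
    return [sorted(row, key=lambda e: e["box"][0]) for row in rows]

def to_html(rows):
    """Step 4: emit simple HTML for the inferred layout."""
    render = {
        "label":  lambda e: f"<p>{e['text']}</p>",
        "image":  lambda e: '<img src="placeholder.png" alt="sketch">',
        "button": lambda e: f"<button>{e['text']}</button>",
    }
    body = "\n".join(
        '<div class="row">' + "".join(render[e["type"]](e) for e in row) + "</div>"
        for row in rows
    )
    return f"<html><body>\n{body}\n</body></html>"

print(to_html(group_into_rows(detected)))
```

The row-grouping heuristic here is deliberately naive; the point is only to show how bounding boxes and recognized text flow through structure inference into markup.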

Workflow Image: Sketch2Code Workflow

It runs in the cloud on the Microsoft Azure platform, using Azure infrastructure and Azure AI services.

Architecture Diagram Image: Sketch2Code Architecture Diagram

The model identifies basic HTML elements such as buttons, labels, and text boxes, allowing it to predict when those elements are present in a given image. It can also recognize handwritten text within the boxes to create a fully formed app or webpage.

The best part about the application is that the generated code is extractable not just as HTML but also as XAML for UWP (Universal Windows Platform) apps. So you can take the generated code and use it however you like as a starting point for your own applications.
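To illustrate the multi-format idea, here is a hypothetical sketch (not the tool's actual code generator) of how one abstract element list can be rendered to either target dialect from a shared set of templates:

```python
# Hypothetical sketch of multi-target code generation: the same abstract
# element list rendered as either HTML or XAML (UWP) markup.

elements = [
    {"type": "label",  "text": "Welcome"},
    {"type": "button", "text": "Sign up"},
]

TEMPLATES = {
    "html": {
        "label":  "<p>{text}</p>",
        "button": "<button>{text}</button>",
    },
    "xaml": {
        "label":  '<TextBlock Text="{text}" />',
        "button": '<Button Content="{text}" />',
    },
}

def emit(elements, target):
    """Render the abstract element list into the chosen markup dialect."""
    templates = TEMPLATES[target]
    return "\n".join(
        templates[e["type"]].format(text=e["text"]) for e in elements
    )

print(emit(elements, "html"))
print(emit(elements, "xaml"))
```

Keeping the detected layout as an abstract model and rendering it through per-target templates is what makes supporting an extra output format cheap: you add a template table, not a new pipeline.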

Let’s test it

My test involves five images, all hand-drawn on paper.


A page with an image, some text, and two buttons.


See the generated HTML page.


A page with a lot of radio buttons.


See the generated HTML page.


A page with an image, text, and some radio buttons. The tool performed pretty well on this one.


See the generated HTML page.


A page with a table.


See the generated HTML page.


A well-organized, nicely laid-out page. The tool performed well on this one too.


See the generated HTML page.


The open source initiative by Microsoft’s AI.lab is a step in the right direction, and there is more to come our way as AI-based tools increasingly aid the web development process. Sketch2Code shows promise: it works on almost any image and recognizes text with reasonable accuracy even when the handwriting is not so good.

The thing I liked most about Sketch2Code is its speed; it is faster than I expected. For that, you can only applaud the people behind it, and the potentially huge open source community it invites today to work on it and improve it further.