Tracking SharePoint User Properties with Microsoft Application Insights

Out-of-the-box page view tracking is dead simple to get working with every web analytics tool I’ve used, and Microsoft’s Application Insights, available via the Azure platform, is no different.

The Application Insights JavaScript code snippet is straightforward enough, albeit a little strange in that it adds a script node to the DOM, but that’s fine:

var sdkInstance="appInsightsSDK";window[sdkInstance]="appInsights";var aiName=window[sdkInstance],aisdk=window[aiName]||function(e){function n(e){t[e]=function(){var n=arguments;t.queue.push(function(){t[e].apply(t,n)})}}var t={config:e};t.initialize=!0;var i=document,a=window;setTimeout(function(){var n=i.createElement("script");n.src=e.url||"",i.getElementsByTagName("script")[0].parentNode.appendChild(n)});try{t.cookie=i.cookie}catch(e){}t.queue=[],t.version=2;for(var r=["Event","PageView","Exception","Trace","DependencyData","Metric","PageViewPerformance"];r.length;)n("track"+r.pop());n("startTrackPage"),n("stopTrackPage");var s="Track"+r[0];if(n("start"+s),n("stop"+s),n("addTelemetryInitializer"),n("setAuthenticatedUserContext"),n("clearAuthenticatedUserContext"),n("flush"),t.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},!(!0===e.disableExceptionTracking||e.extensionConfig&&e.extensionConfig.ApplicationInsightsAnalytics&&!0===e.extensionConfig.ApplicationInsightsAnalytics.disableExceptionTracking)){n("_"+(r="onerror"));var o=a[r];a[r]=function(e,n,i,a,s){var c=o&&o(e,n,i,a,s);return!0!==c&&t["_"+r]({message:e,url:n,lineNumber:i,columnNumber:a,error:s}),c},e.autoExceptionInstrumented=!0}return t}(

This will get you tracking good metrics on any page view, which are best viewed in the Logs section of the Application Insights “blade” in the Azure portal.

SharePoint scripting considerations

For SharePoint, if you want more (and you certainly do, because the user data is very rich), it’s a little trickier to set up.

Script load order is perhaps the least straightforward thing in SharePoint. This may be changing in SharePoint 2016, but usually the developer is tasked with accounting for the disjointed manner in which scripts are loaded.

In most cases with SharePoint scripts, it’s basically a given that you’ll be layering your function calls under the arcane SP.SOD.executeFunc or SP.SOD.executeOrDelayUntilScriptLoaded functions to handle the unordered loading. For example:

SP.SOD.executeFunc('core.js', 'FollowingCallout', function() { FollowingCallout(); });

In the above, we use the script to declare a script dependency for the object that we want to utilize in the subsequent function…MESSY!

SharePoint’s namespace challenge

The reason the code snippet defines a script reference on the fly probably relates to the fact that it is trying to bypass the load order of other scripts and track the page view ASAP. But there’s a problem…

SharePoint uses a script dependency convention of namespaces, which disallows JS objects from being redefined if they already exist.

The problem for our script, then, is that both SharePoint and the script referenced by the tracking snippet define a Microsoft object, which is going to cause a collision if the App Insights script defines it first. And guess what: that’s exactly what it’s trying to do.

The only way around this is to download the script referenced in the snippet (in my case, the 2.4.4 web SDK) and modify the definition of the Microsoft object, which occurs early on in the script…

Before (with Microsoft)

/*!
 * Application Insights JavaScript SDK - Web, 2.4.4
 * Copyright (c) Microsoft and contributors. All rights reserved.
 */
! function(e, t) {
    "object" == typeof exports && "undefined" != typeof module ? t(exports) : "function" == typeof define && define.amd ? define(["exports"], t) : t((e.Microsoft = e.Microsoft || {}, e.Microsoft.ApplicationInsights = {}))
}(this, function(e) {
// etc, etc...

After (changed to MSAzure)

/*!
 * Application Insights JavaScript SDK - Web, 2.4.4
 * Copyright (c) Microsoft and contributors. All rights reserved.
 * Modified by  on 
 */
! function(e, t) {
    "object" == typeof exports && "undefined" != typeof module ? t(exports) : "function" == typeof define && define.amd ? define(["exports"], t) : t((e.MSAzure = e.MSAzure || {}, e.MSAzure.ApplicationInsights = {}))
}(this, function(e) {
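To see why the one-word rename works, here is a minimal sketch (assumption: a simplified stand-in for the SDK’s UMD wrapper written by me, not the actual SDK code):

```javascript
// Simplified stand-in for the SDK's UMD export line:
//   t((e.MSAzure = e.MSAzure || {}, e.MSAzure.ApplicationInsights = {}))
function attachSdk(globalObj, rootName) {
  // Reuse the root namespace if it already exists, otherwise create it
  var root = globalObj[rootName] = globalObj[rootName] || {};
  root.ApplicationInsights = { version: "2.4.4" };
}

// Fake window where SharePoint already owns the Microsoft namespace
var fakeWindow = { Microsoft: { SharePoint: { ClientContext: {} } } };

// Attaching under "MSAzure" leaves SharePoint's Microsoft object untouched
attachSdk(fakeWindow, "MSAzure");
```

With the rename, SharePoint’s Microsoft object and the SDK’s MSAzure object live side by side, and neither script’s namespace convention trips over the other.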

Then put a copy of the script on the SharePoint server. I recommend putting it under

<YOUR SITE>/SiteAssets/js

Next, change the tracking code snippet to refer to the modified, uploaded script.

var sdkInstance="appInsightsSDK";window[sdkInstance]="appInsights";var aiName=window[sdkInstance],aisdk=window[aiName]||function(e){function n(e){t[e]=function(){var n=arguments;t.queue.push(function(){t[e].apply(t,n)})}}var t={config:e};t.initialize=!0;var i=document,a=window;setTimeout(function(){var n=i.createElement("script");n.src=e.url||"",i.getElementsByTagName("script")[0].parentNode.appendChild(n)});try{t.cookie=i.cookie}catch(e){}t.queue=[],t.version=2;for(var r=["Event","PageView","Exception","Trace","DependencyData","Metric","PageViewPerformance"];r.length;)n("track"+r.pop());n("startTrackPage"),n("stopTrackPage");var s="Track"+r[0];if(n("start"+s),n("stop"+s),n("addTelemetryInitializer"),n("setAuthenticatedUserContext"),n("clearAuthenticatedUserContext"),n("flush"),t.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},!(!0===e.disableExceptionTracking||e.extensionConfig&&e.extensionConfig.ApplicationInsightsAnalytics&&!0===e.extensionConfig.ApplicationInsightsAnalytics.disableExceptionTracking)){n("_"+(r="onerror"));var o=a[r];a[r]=function(e,n,i,a,s){var c=o&&o(e,n,i,a,s);return!0!==c&&t["_"+r]({message:e,url:n,lineNumber:i,columnNumber:a,error:s}),c},e.autoExceptionInstrumented=!0}return t}(

Getting and tracking SP User Properties

Despite all this load-order chaos, there’s a way to track user properties along with page views. It can be accomplished with a call to the SharePoint API endpoint GetMyProperties, with the response then provided to the trackPageView function. The endpoint can be accessed at the following address (keeping in mind that subsites may precede the /_api/ part):

<YOUR SITE>/_api/SP.UserProfiles.PeopleManager/GetMyProperties

The following code snippet makes use of the Azure documentation sample, with one modification: I removed the page track call and put it inside the success callback of a request to the SharePoint API endpoint GetMyProperties.

var sdkInstance="appInsightsSDK";window[sdkInstance]="appInsights";var aiName=window[sdkInstance],aisdk=window[aiName]||function(e){function n(e){t[e]=function(){var n=arguments;t.queue.push(function(){t[e].apply(t,n)})}}var t={config:e};t.initialize=!0;var i=document,a=window;setTimeout(function(){var n=i.createElement("script");n.src=e.url||"",i.getElementsByTagName("script")[0].parentNode.appendChild(n)});try{t.cookie=i.cookie}catch(e){}t.queue=[],t.version=2;for(var r=["Event","PageView","Exception","Trace","DependencyData","Metric","PageViewPerformance"];r.length;)n("track"+r.pop());n("startTrackPage"),n("stopTrackPage");var s="Track"+r[0];if(n("start"+s),n("stop"+s),n("addTelemetryInitializer"),n("setAuthenticatedUserContext"),n("clearAuthenticatedUserContext"),n("flush"),t.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4},!(!0===e.disableExceptionTracking||e.extensionConfig&&e.extensionConfig.ApplicationInsightsAnalytics&&!0===e.extensionConfig.ApplicationInsightsAnalytics.disableExceptionTracking)){n("_"+(r="onerror"));var o=a[r];a[r]=function(e,n,i,a,s){var c=o&&o(e,n,i,a,s);return!0!==c&&t["_"+r]({message:e,url:n,lineNumber:i,columnNumber:a,error:s}),c},e.autoExceptionInstrumented=!0}return t}(

$.ajax({
    url: "/_api/SP.UserProfiles.PeopleManager/GetMyProperties",
    type: "GET",
    headers: { "accept": "application/json;odata=verbose" },
    success: function (data) {
        // Flag the profile properties we want to track; everything else stays out
        var userinfo = {};
        userinfo["Department"] = true;
        userinfo["Office"] = true;
        var properties = data.d;
        if (properties.UserProfileProperties.results != null && properties.UserProfileProperties.results.length > 0) {
            for (var p in properties.UserProfileProperties.results) {
                var prop = properties.UserProfileProperties.results[p];
                // Only copy values for the flagged keys
                if (userinfo[prop.Key] === true) {
                    userinfo[prop.Key] = prop.Value;
                }
            }
        }
        appInsights.trackPageView({name:document.title, title:window.location.href, properties:userinfo});
    }, // End success method
    error: function (xhr) {
        console.error(xhr.status + ': ' + xhr.statusText);
    } // End error method
}); // end of Ajax call to GetMyProperties

…This approach does not depend on SharePoint script load order. Even the jQuery isn’t necessary, since the request can just as well be a standard JavaScript XMLHttpRequest.
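For example, here is a jQuery-free sketch (assumptions: the helper and function names are mine, and appInsights is the proxy object created by the snippet above):

```javascript
// Pure helper: copy only whitelisted user-profile properties (keeps PII out)
function pickUserProps(data, wanted) {
  var out = {};
  var container = data.d.UserProfileProperties || {};
  var results = container.results || [];
  for (var i = 0; i < results.length; i++) {
    if (wanted[results[i].Key]) {
      out[results[i].Key] = results[i].Value;
    }
  }
  return out;
}

// Browser-side usage: fetch the properties, then track the page view
function trackPageViewWithUserProps() {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/_api/SP.UserProfiles.PeopleManager/GetMyProperties");
  xhr.setRequestHeader("accept", "application/json;odata=verbose");
  xhr.onload = function () {
    if (xhr.status === 200) {
      var userinfo = pickUserProps(JSON.parse(xhr.responseText),
        { Department: true, Office: true });
      appInsights.trackPageView({ name: document.title, properties: userinfo });
    }
  };
  xhr.send();
}
```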

In the above code, the desired fields are called out rather than simply copying all user fields (which you could do), because you usually don’t want to track personally identifiable information (PII) with web stats. So in the above snippet, I’m tracking simply the department and the office location of the user by iterating through all available properties and copying only those values out.

I recommend doing this even if you want PII, because there can be a lot of useless junk in the user properties, and App Insights does charge by data volume.

Which brings me to my final point…

What about cost?

A lot of people go with Google Analytics because you can do all this and it’s free. However, GA doesn’t offer all the capabilities of Application Insights, so that’s something to consider. Furthermore, a lot of times companies avoid Google services simply because they have a, shall we say, dodgy reputation regarding intellectual property. So companies will go with Adobe Analytics, WebTrends, or other offerings. And Microsoft’s Application Insights is a contender here.

With Azure, nearly every resource you spin up has a fee attached. Application Insights currently charges $2.30/GB of tracked data.
The total cost depends on the amount of website traffic you get and the content load of your SP site. And typically, SharePoint has a ton of fluffy libraries to load.

I’m going to call a medium-trafficked site one that sees about 1,000 visits/day, and assume that generates roughly 1 GB of telemetry daily, or about $2.30/day. Let’s pad that figure by rounding up to $3 for easy math, and call it 30 days in a month; so we have $3 * 30 * 12 = $1,080/year using Application Insights.
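Spelled out as code (assumptions: roughly 1 GB of telemetry per day at that traffic level, padded from $2.30 to an even $3):

```javascript
// Back-of-envelope annual cost: dollars/day * days/month * months/year
function annualCost(dollarsPerDay, daysPerMonth, monthsPerYear) {
  return dollarsPerDay * daysPerMonth * monthsPerYear;
}

console.log(annualCost(3, 30, 12)); // 1080
```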

That’s certainly competitive compared to other platforms. But it should be noted that the reporting side of Application Insights has more of a learning curve, and probably warrants a consulting engagement to get a decent dashboard set up for analysis.

Up and running fast with PHP/Apache on Windows 10

With all the various server options available (NodeJS, IIS Express, .NET Core offerings, tons of cloud offerings), it’s important to remember that, on the internet today, PHP is still the king, and Apache is its throne. This is not to enter into the argument of whether it’s the best choice, but it is definitely the most economical choice, being entirely free.

So there’s little reason you shouldn’t have a copy running on your local computer, instead of browsing through cPanel on a cheap hosting service and doing a Los Angeles freeway’s worth of traffic uploading file modifications. Just run it locally! It’s not that difficult.

There are a lot of vendors who provide handy packages such as MAMP, but these don’t gain you much beyond the setup I’m about to outline. Also, the experience of configuring and running a local server is worth the time.

Download PHP and Apache

For this guide, I’m using Apache 2.4.41, and PHP 7.3.7. Get the zips for these first:

Installing PHP

Extract the php zip into a convenient folder on your local drive. C:\php is a safe bet.

Now, in your new php folder, copy php.ini-development and rename the copy to php.ini. This will cause PHP to initialize with the default settings for development, which is good for localhost work.

In the php.ini file, set the extension_dir to your php folder’s ext subfolder:
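Assuming you extracted to C:\php, the line would read:

```ini
; php.ini: point PHP at its bundled extensions folder
extension_dir = "C:\php\ext"
```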

Enable the following extensions, which will be needed if you plan to use MySQL. (Note: curl is a very common network request tool, and gd2 is a graphics library that comes in handy for manipulating/drawing images.)
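Enabling an extension in php.ini means removing the leading semicolon from its line. Assuming PHP 7.3’s Windows build, the relevant lines look like this once uncommented:

```ini
; php.ini: extensions for curl, image drawing, and MySQL access
extension=curl
extension=gd2
extension=mysqli
extension=pdo_mysql
```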

Open Control Panel, go to System and Security, and then choose System (Control Panel\System and Security\System).

Once there, click on Advanced system settings. In the popup, under the Advanced Tab, click the [Environment Variables…] button at the bottom:

Edit the Path variable in your Environment Variables:

Click the New button to add to the environment variable list, and add your php folder:

Click OK to both pop-up windows to apply the change.

Install Apache

Now extract the Apache zip to your local drive. The zip I link above has two items in the root, a folder called Apache24, and a readme_first.html file, so we can just extract the whole thing to the C drive.

Go to the extracted folder (C:\Apache24), open the conf subfolder, and edit the file httpd.conf.

In the httpd.conf file, modify the DirectoryIndex configuration. This tells Apache that index.php is the first file to look for in a folder it is serving. Notice I’m keeping the index.html file too, but you don’t have to:
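With index.html kept as a fallback, the directive would look like this (assuming the stock dir_module block in httpd.conf):

```apacheconf
# httpd.conf: serve index.php first, then index.html
<IfModule dir_module>
    DirectoryIndex index.php index.html
</IfModule>
```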

Then, at the end of the file, add this (note: the php7apache2_4.dll Apache module is only included in the thread-safe Windows builds of PHP):

PHPIniDir "C:/php"
LoadModule php7_module "C:/php/php7apache2_4.dll"
AddType application/x-httpd-php .php

Run Apache

Open CMD and run httpd, which you will find in the bin subfolder of the Apache folder (C:\Apache24\bin\httpd.exe).

You may have to allow it through Windows Firewall on first run:

While it’s running, httpd won’t write anything more to the command line. If you need to stop it, press CTRL+C; you will then be prompted “Terminate Batch Job (Y/N)?”, and the answer is Y.

Both the start-up and shut-down can take a moment. Be patient especially with the shut-down and wait for the terminate prompt.

Testing PHP

The hello world you can use to test your new PHP/Apache setup is the following dirt-simple PHP file:

<?php
echo "Hello World!";

Put this in a text file and save it as index.php in your Apache htdocs folder (C:\Apache24\htdocs\index.php)

Now when you open “localhost” in your browser, if everything installed okay, you should see this:

And the best part is, once you know your PC’s network IP address, you can visit your hosted content from your phone, tablet, other computers, etc. Just run ipconfig from the Command Prompt and look for this piece of information:

So in this case if you browse to http://<Your IPv4 Address>/ on one of these devices or another computer, you should see “Hello World!”.

Up and running fast with Mendix (Part 2)

In the first part, I covered the essentials to getting started with the rapid web application development platform known as Mendix.

We’re going to expand out the app we started in the first part, and explore new concepts with it, such as Attribute Associations, Data grid columns, and Microflows.

Let’s get going!

Orders module

It’s good to divide your app up into different functional areas using modules. Modules incur no performance hit or overhead; they are simply a grouping of the various functional parts of your app.

Each module will have its own domain model, but that doesn’t mean we can’t reference entities in a different module’s domain model; it simply helps us organize something complicated: our application.

To create a new module, right click on the Project node in the Project Explorer panel:

Name the module “Orders”.

Now let’s open up the Orders domain model and add an order entity with the following attributes:

Note on the OrderNumber attribute: we don’t need to create a unique ID for each entity; Mendix does that behind the scenes and manages all the associations that go along with it. However, in this case it makes sense to store an additional ID, like a deli counter number, so that we have a quick point of reference for our order.

In other cases you may already have a unique identifier as part of your process; maybe it’s an alphanumeric string. No problem: you can add it and put constraints around it, but Mendix will still store a unique numeric ID behind the scenes in the application database.

Note: Mendix uses HSQL as its default database, however, this can be reconfigured to point to another kind of database such as Postgres, etc.

By using the AutoNumber data type, Mendix will create an auto-increment number field and increment it each time it creates a new instance of that entity.


We have an Item entity, and we have an Order entity, and we need to associate the two. But let’s think about this: we need another entity that records the association from the Order to the Item. We need OrderLine because an Order may be associated with any number of Items, and each line will likely carry additional data, such as Quantity and other fields that qualify the order’s relationship with the Item.

So let’s start with the simplest concept of an order line:

Associating two entities

Obviously, OrderLine is going to be associated to Order, but the direction matters because it implies the child->parent relationship. Since OrderLine is the child of Order, let’s create an association by hovering the mouse over the edge of OrderLine, then clicking and dragging to Order:

Double click on the name of the association, which defaults to <ChildEntity>_<ParentEntity> (in the case above, it’s adding a 2 because it knows I did this already), and the settings for the association will open in a dialog window:

The Multiplicity setting is the most important part of the dialog. Since we drew a child-to-parent relationship, it defaulted to “one Order object is associated with multiple OrderLine objects”, which is true.

The delete behaviors are also worth noting. How should Mendix handle related data when part of that data gets deleted?

Well, it’s usually better practice not to delete database records (we can always flag them and exclude them instead), but if you have a good use case for actually deleting the records, you will need to consider Mendix’s association deletion behaviors.

OrderLine to Item

We need another association, to our Item entity, which lives in our Inventory module. This time we are not looking for a one-to-many association but a one-to-one association, because one order line will refer to a single item.

We can’t draw this association as we did above, but Mendix has another way to create associations: double click on the OrderLine entity, go to the Associations tab, and click New:

The Select Entity dialog will pop up as soon as you click New, and you can navigate to your Item entity under the Inventory module.

When the new association appears in the Associations list, you can double click on its name and edit the Multiplicity to be [1 – 1]:

Now, it would probably be better if we made the association from Item to OrderLine, but for 1-to-1 associations this doesn’t matter as much because, as you can see from the delete behaviors, Mendix can handle either parent->child scenario.

Our Orders domain model draws a line from OrderLine going off the screen because it is referencing another domain model’s entity:

Simple though it may be, this is all we need to get going on our UI.

A UI for taking orders

Add a new page to the Orders module and call it Orders_Overview. As before, remove the junk content the page is created with:

Don’t forget to set a page title in the Page properties!

Learn from my shortcomings: if you remember nothing else, title your pages.

Add a Data grid widget to the region you emptied, and associate it to the Order entity:

Double click on the title area of the data grid widget (where it says [Unknown]), and go to the Data source tab to Select the Entity.

Mendix will again offer to populate the Data grid columns for you based on the attributes it finds in Order. You will, of course, agree, and the result will look like this:

We don’t need all the search fields, just Order number and Status will do fine:

OrderLine’s New/Edit page

Same as before, we want to auto-generate a New/Edit page for Order by right clicking on the New button and choosing “Generate page…”

Select the layout from the dialog’s form options:

I like Form Vertical

Remove most fields from this page except for the OrderNumber AutoNumber (which can’t be edited anyway because it’s a read-only field), and add a data grid.

In the data source tab, specify Over association and select OrderLine; this will instruct the widget to display only the OrderLines associated with the Order for which the page was opened (a new order or an edited order).

When you click okay, and say Yes to the prompt about automatically filling the grid’s columns, you should see the single attribute belonging to OrderLine, Quantity:

Only one field in OrderLine

You’ll need to add columns related to Item, which come over the 1-to-1 association between OrderLine and Item. To do this, first add a new column by right clicking on the column header row and choosing Add column left or Add column right:

Right click on the new column and choose Select attribute; you will then be prompted for the caption of the column and the source of the information you want to show, which is specified as the Attribute (path).

Choose the Name attribute from Item, which will appear under the OrderLine_Item association

Go ahead and add the Price attribute to the data grid as well.

Okay, but where are the action buttons?

You’ll notice that, unlike the Data grid widget with a database entity source, the data-by-association grid doesn’t have any buttons out of the gate.

We will make them.

Right click on the gray “button bar” above the data grid column headers and select Add button > Create from the context menu. Now our New button shows up:

On the New button, right click and choose our friend “Generate page…” and choose the page layout from the list of form options:

Now this result is interesting:

…well, not that interesting.

It created a form with Quantity from OrderLine and Name from Item, which is convenient. It chose the Name attribute because that’s the first string attribute it found on Item, and it knows it needs to give you a selector for Item because OrderLine has an association to it.

You should probably drag Item above Quantity, though. Also, change the page layout in the Page Properties panel to PopupLayout (a canvas width x height of 700×500 is good).

Now let’s build and run our program…but before we do, two last things…

1. Get rid of this Add button on our Orders page because it doesn’t know what to add, and it’s not all that useful.

2. Remember to add an option in the Project Navigation for our Orders page:

Build and run

When you run the app, you will be prompted to synchronize the database because you added new entities. There is no sane reason for not doing this.

With the app built, and running, click on our new Orders navigation option, and click the new button on the Orders page.

Then, on the New/Edit Order page, click the New button above the OrderLine data grid.

You are presented with our current inventory of items.

However, when you save the order, and return to the Orders page, you will see that there is some missing functionality: Total cost is not updating. Also, there’s no option to pay for the order.

We’ll return to total cost. Let’s tackle the pay option first using a Microflow.


If you’re a developer, let’s get some frustrating realities about Mendix out of the way:

  1. There is no code to write, only expressions in Microflows (similar to Excel formulas)
  2. You cannot enter a Microflow via text, you must graph it out via the Microflow editor.
  3. Microflows can get complicated, and you still have only the Microflow editor in which to manage them.

From the reactions I’ve seen from developers, Microflows are by far the bitterest pill to swallow. Part of the reason is simply that developers like to code, and Mendix basically doesn’t allow it (unless you are willing to step down into the Java layer, which sometimes is a good idea).

Just put the apprehension aside and let’s get started.

In the project explorer add a new Microflow by right clicking on the Orders module and choosing “Add microflow…”

Call your microflow MF_PayForOrder

The convention I use, and have seen used, is to prefix microflows with MF_ for a Microflow function (something is processed), MC_ for a Microflow calculation (used by attributes with calculated values), and MV_ for Microflow validation (used when we want to validate form data).

Inside the Microflow

All Microflows have a start point, one or more end points, and one or more Actions.

The Microflow will resemble any flow diagram you’ve ever graphed in Visio; the difference is that it will actually be doing the actions listed in it.

For our current case, we need the microflow to act upon an order and mark it as Paid.

This can only happen by receiving a parameter, which appears as the yellow shape indicated in the diagram above. Click on that, then click again to place it into the microflow; usually we put these above the start point:

As soon as you set it down, the Parameter Settings dialog will pop open and you will be asked to specify which Entity is being passed in. Choose Order, and you will see that the variable name is automatically populated (although it can be renamed); the parameter will be known throughout the logic flow by this name.

I’m going to rename it OrderParam, just for clarity’s sake, and you will see that both the variable name and the type are shown below the parameter.

Next, click the little blue icon next to the arrow to create an action. Move the mouse to the middle of the microflow line and click again to insert the action. Then you will be prompted to choose the type of action:

There are LOTS of options for an action, and it’s easy to get overwhelmed. In our case we are going to choose Change object, because we want to change the value of the entity’s attribute. For this we must select the variable the action will apply to:

You also need to consider whether you want to store the change right away in the database, or whether the change is part of a bigger set of changes and should be committed later.

Furthermore, when the change happens, do you need to update the UI right away? If so, choose Yes for Refresh in client.

Finally, specify the attributes to change (only Status here) by clicking the New button above the attribute change list field.

For the member attribute, select Status. Since Status is an enumeration, we need to specify one of its values (NotPaid, Paid, Cancelled). You can do so by typing the module name (Orders); typing a dot then brings up the data elements available within that module (constants, enumerations), and another dot brings up the enumeration’s possible values.

Let’s set it to Paid (assuming the enumeration is named OrderStatus, the expression reads Orders.OrderStatus.Paid):


In addition to setting this value, let’s commit the object right away (set Commit to Yes), and also refresh the UI (Refresh in client to Yes) so the order will reflect the change in status to the user immediately:

Note that the action will represent both of these settings in the top right corner, as an up arrow for commit, and a circling arrow for refresh in client.

Wiring up our Microflow

We have the logic to change the Order, but where is it going to be executed? Let’s add a [Paid] button to our Orders page:

Make sure you click on the button bar and not another element in order to get the right context menu

The button will appear with the label “Action”. Double click the button to edit it; in the settings dialog, set the caption to Paid, then go down to the On click event and select Call a microflow:

Some people have “Do nothing” as a default setting also

When you select Call a microflow, you will be immediately prompted to choose the Microflow from our modules. Find MF_PayForOrder:

That’s it, save all and Build/Run it!

Paying for orders

Try out the new button on the Orders page. When you select an order and click it, the status should change to Paid:

One step closer to a second career as a burger stand cashier!


In the next part I will cover Microflow Actions in more depth, showing you how to calculate the total of the orders, as well as how to constrain the data so that it only shows under certain conditions.

Stay tuned.

Up and running fast with Mendix web apps (Part 1)


Mendix is a rapid web application development platform that will likely be hated by developers and loved by the business.

It will be hated by developers because it effectively trivializes what we do. It will be loved by the business because it delivers on the promise of rapid applications that work, and capture key data which can then be reported on or integrated with other applications.

I am sympathetic to the complaints of developers. The business always wants easy answers to difficult problems, and Mendix will not always be the solution they want, and it will be difficult to communicate the why’s when it’s not.

Be aware though: if the business is interested in it, stonewalling this option may cause more problems than embracing it, because there are plenty of vendors who don’t share your concerns, and Mendix is predominantly a cloud-hosted offering.

That said, if you are running a business where your associates are building a Tower of Babel out of Excel files, you would benefit greatly from this platform. It is excellent at bookkeeping and handling transaction records. Better yet, because it is actually code free, there will be very little maintenance to speak of.

Mendix is worth exploring, and this guide will take you through what it is and how it works.


Mendix is free to develop with; it only costs money when you push the app to a Mendix server. There are on-premise offerings, but it’s unlikely many customers are using them.

To get started, you need a Mendix account. I recommend you sign up with your business email, in the event you like the platform and want to discuss licensing.

Installing the Business Modeler

Once you sign up for a Mendix account, you need to install the Business Modeler, which is the application used to develop a new Mendix app.
Mendix has the Modeler on a rapid release schedule, and versions change frequently.

Apps developed in newer versions cannot be assumed to be backwards compatible, nor can older apps be expected to load in newer versions of the Modeler. It is important, then, to get the right version of Mendix if you are working on an existing app.

If you are creating a new app, it is advised that you get the latest version of Mendix Business Modeler.

You can find all versions of Mendix here:

Logging in

If you’re installing Mendix because someone else invited you to collaborate on their project, you will see upon login a list of projects that you have been given access to. Otherwise, you will see the option to create a New App:

Creating a new app

Mendix has templates to start from, but let’s make it simple and choose the uninspiring Blank gray app icon to start.

Like buying the Brand-X cereal at the grocery store.

Next you will be prompted to name your app. Make sure you enable Online Services, which allows for version control and collaboration. The only time you would not want this is if you are just working on a personal project.

Note: saying no to this option still allows you to deploy to a cloud; it’s just not a dedicated cloud, and the resources allocated to it will be transient. It may go dormant during periods of no use, and then take a minute to spin back up when you access it again.

Inside the Business Modeler

With your app created, you will see the main project view, which includes several functional panels.

You can rearrange these for better space efficiency. For example, the central panel can use more space, so the Project Explorer and Properties panels can shrink down, and you can drag Console and Debugger to the bottom so they have more horizontal room.

I like the following setup, and use the panel pins to hide areas when I don’t need to see them. The center, content panel is the most commonly used one, followed by Project Explorer, which will fill up quickly with content:

The Errors tab is important to find because you cannot run your project if it has any errors. Of course, Mendix will tell you that if you try.

Mendix Terminology (the basics)

  • Module – a grouping of functions (Microflows), content (pages, images, enumerations), and a database schema whose entities can be accessed from other modules' domain models and Microflows
  • Microflow – a function that can perform actions on the database, on the UI and do basic logic
  • Page – a UI arrangement of input components and widgets, can be a full page or a pop-up dialog style page
  • Page Layout – a template for presentation of a page (for example, a standard web page or a popup page)
  • Snippet – a grouping of UI elements which can be used within a page. A good example is a snippet for a display of a record type that is to be repeated across the page depending on the number of records
  • Entity – a database table schema
  • Attribute – a database column, but using Mendix’s embellished data types
  • Association – a reference from one Entity to another Entity

Starter Project Modules

All projects are going to start out with the following content in the Project Explorer:

  • Project ‘YourProjectName’ node
    • Global project settings go here, including application Navigation, App Store Modules (external add-ins to extend Mendix), and Security settings, which govern which user roles can see which pages and Microflows (functions)
    • Security level begins in the Off state, which allows anyone access. That is good for getting started, but most of the time you begin working on security roles early on and will set this to Prototype / demo
  • System – holds some data schema related to users and logins; generally you leave this alone except to extend or change the application startup function (or Microflow, as Mendix calls functions).
  • MyFirstModule – has a Domain Model for quick start of database design (usually you want to create a new module though), a Home page for quick start layout of UI that will appear when the app runs, and an Images collection of pictures used in the stock Home page.

Using the Domain model editor

Always start with the data schema. You will be using entities, attributes, and associations. To add an entity, click the Entity button in the top left and a new entity will appear on the panel which you can drag around.

Try to contain your excitement.

Double click on this and you will be able to edit the properties of the entity:

  • Name – how the entity will be referred to throughout the app. Mendix likes capitalized values, and no spaces, such as MyEntity
  • Generalization – you can base entities on other entities which is similar to object oriented programming inheritance. This feature should be used conservatively because it can incur complexity and overhead quickly. Only use it when you need it.
  • Image – I’ve never used this.
  • Persistable – You don’t always want to keep your data, sometimes you want entities for temporary use. These will be managed by the app and stored in memory instead of in the database.
  • System members – fields that can be tracked automatically by Mendix. I have generally checked these off for main entities.
  • Documentation field – these will show up throughout the Mendix UI. If you populate them, Mendix can provide nice Javadoc documentation. It’s okay to leave this blank until your app matures.
  • The tabs appearing below these settings
    • Attributes – these are the fields of your Entity. They all need a type and can have an optional default value. If a value is calculated rather than stored, a Microflow determines its value. More often, though, you won't use calculated values; calculating them externally is better app design.
    • Associations – are references to other entities. These are actually stored like attributes, but Mendix manages them separately because there are more conditions around them, such as how Mendix should behave when an entity with an association is deleted; should the associated records be deleted also? These cases can be specified.
    • Validation rules – you can specify that an attribute must be entered, but in most cases it is better to do this through UI functions.
    • Event handlers – events such as Create, Delete, Modify, and Rollback can trigger Microflows either before or after the event. This is very useful.
    • Indexes – this matters when you have a table with a lot of data

To add a new attribute, click on the New button in the Attributes tab:

We need an objective…

So we’ll create a simple order app to fill out this demo.

We already have an Item entity, we’ll need Order (which is linked to a user account), and also OrderLine, so we can have a many to one relationship between the Item and the Order, and we can also specify attributes like quantity.

But thinking ahead, it's clear that items will need to be managed separately from orders. So let's make a new module for items. Right click on the Project node in the Project Explorer and choose Add module…

You will be prompted simply to provide a name, so we’ll call it ‘Inventory’.

The ‘Inventory’ module

The new Module will appear in the Explorer, and it will have a domain model ready to populate. You can cut and paste your Item entity into it.

We need a UI to manage the items; create them, modify them, and later on maybe have some other overview information.

Right click on the Inventory module and choose Add page…

Mendix presents a smorgasbord of templates to choose from, and that’s overwhelming. So let’s go with the standard option; choose List in the left-side navigation, and select List Default:

Why do we keep choosing the boring stuff?

The new page will appear with some default elements like a title, and a paragraph / subtitle. But before you get going, give the page a title in the properties panel! Let’s call it “Inventory Management”.

I always forget to do this!

You can quickly edit the page by giving it title text and a sub heading. Just click on those elements and start typing and it will overwrite the text.

Then delete that content within the box below this, and move your mouse over to the Toolbox tab that appears on the right edge of the app. Click or hover over this tab and the Toolbox pane will appear, filled with (SO MANY) widgets to use on your page.

The page will have a bunch of filler content to start. Clear the stuff within the region below the page title and sub title.

We’ll use a simple out-of-the-box widget called Data grid, which gives a very standard table presentation of an Entity’s records. Drag it to the region you cleared.

You’ll also notice the number headings over the content regions. These are Bootstrap related. Mendix outputs the page content into Bootstrap’s 12-column layout to facilitate responsive web design. This way, the content will wrap as needed, depending on the size of the screen.

Mendix really forces you to adhere to this by requiring columns add up to 12 and telling you when they don’t. But don’t be mad, responsive web design is the inescapable reality of the Internet.

The Widget’s Data Source

When you drag the Data grid into the box, it will show up in the most general way, with no idea what to display. Double click on it to set its Data source, click the [Select…] button next to the Entity (path) field, and choose the Entity in the Select dialog.

As it turns out, Item is our only option anyways.

Mendix will prompt you if you want to auto-populate the table columns with the fields of Item. Yes, that’s helpful!

I also want to automatically fill the contents of my tax forms…

Now the data grid appears with columns found in Item:

OutOfStock attribute is out of screenshot

The New/Edit page

It’s pretty obvious Mendix has the ability to show a bunch of data from a table in this data grid widget, big whoop. But where the platform starts to shine is how simple it makes creation of pages to fill out that data.

Let’s set up the New/Edit page for Item. All we need to do is right-click on the New button that appears in the data grid, and choose Generate page…

This will give you the page dialog again, this time showing only forms in the left-side navigation, and we are going to choose Form Vertical for a nice vertical display of each field:

Notice that the Page name is prepopulated to be <EntityName>_NewEdit

This one action will:

  • Create the new/edit page that can save the record to the database
  • Wire up the New button to launch it in the web app
  • Wire up the Edit button to launch it in the web app for an existing record
  • No code

Take a look at the page properties by clicking outside the actionable area of the page (either the darkened region above or the blank region to the right). Otherwise you may be looking at the properties of a selected element.

With the page properties showing, look at the Layout field.

The page layout it’s using (Atlas_Default) will cause the web application to open a new page when the item is created or edited.

It would be a better UX to just have a popup since Item is not a complicated entity with many fields and logic. If you click on the page layout’s value, you can select a popup layout.

Find PopupLayout in the Select dialog:

Use the Filter search to make it easier to find

Also, edit the Canvas width and height properties so that our popup is reasonably sized. The popup layout will scroll the content if needed, but we’ll be fine with the minimum Mendix popup layout size: 600w x 500h.

Save All, and we can test run the app!

Building and running(?)

Go to the Run button at the top center of the app menu, and select Run Locally. The project begins to compile…and finds some errors.

You can look at these in the Errors tab that appears at the bottom left of the application. Double click on the error and it shows you the context.

Green Add button doesn’t know what to do with itself.

One problem is that the Add button we left on our Inventory Management page doesn’t know what to add. We have to tell it the entity. Right click on it and choose Select entity… and choose Item from our Inventory module. The other error is it doesn’t know what page to use for adding a new Item. Conveniently, we can use our Item_NewEdit page which Mendix helpfully generated. Find it under the Inventory module and select it.

Now you will see that all errors are gone. Let’s try running it now.

Go to the top of the business modeler, in the middle of the menu select the drop down next to the Run button and choose Run Locally from the drop down:

The other Run option sends the build to the Mendix cloud

This will cause Mendix to build the application as a local Apache web server, and initialize that server so we can see it in our browser on localhost.

Note: the other Run option sends our build to the Mendix development cloud to be hosted, which is also fine because we don’t have to pay for hosting until we want to deploy our app to a dedicated hosting node.

Your app should now build with no issue, but you will need to wait until the web server is up and running, and that takes time.

Also, the first time you successfully run your app, Mendix will ask if you want to create a default database for it. Of course you do!

Filed under questions you probably shouldn’t say ‘No’ to.

After a little more building, it will prompt you that it’s ready to show itself. You’ll know because it will tell you:

There are options for Viewing so that you can see your app behaving as it would on a mobile-sized screen, but usually we’re going with the Responsive Browser:

This will launch your browser and show your running app.

First run of your app

Well…this sucks.

We need to make our new page available.

Navigation needs your new page

We have a page to create and edit items, or so it appears, but the user currently has no way of getting to your Inventory Management page.

It must be added in the Navigation node under the Project node:

On this screen click New item under menu, and add the following, selecting your Inventory Management page:

Now our Navigation Items list will look like this:

Our inventory must be of the edible variety…

Save all changes, and build/run it again by clicking Run Locally.

Using our Inventory Management screen

Now when you run the app, your new navigation item will appear. And when you click it, the page will load:

It looks good for zero custom design work, and it works.

Click the green Add button or the New button to add an item and the Item New/Edit page pops up, ready for settings:

The ubiquitous stock of hamburger inventory

Set the Name, set the Price, set the OutOfStock flag, click Save, and our item is added to our inventory:

Let’s add more items:

It’s all about the beverage sales.

To edit an item, click it and click the Edit button:

AKA “The Bladder Buster”

Recap of Part 1

You can probably imagine easily some problems with the setup as is.

  • First of all, only certain people should ever see this page (that’s where security roles come in).
  • Secondly, what if a lazy employee adds a duplicate product? (validation rules)
  • Thirdly, what about tracking changes made to the product? (more app/data design needed)

…And so on. These are important questions that you should be asking in order to make a better app. But we’re not going to cover them here.

In the next part, I will build the Ordering functionality of our app which will allow you at the very least to handle the IT needs of your fledgling burger business.

We will cover associations, Microflows, nested data widgets, and data source constraints.

Go to Part 2 >>

Disk Space Freeing Tips For Data Survivalists

It doesn’t take a lot of development to end up with a critically low amount of disk space. Depending on what you develop, you could face this scenario in a matter of months. Then the grim reality of living day-to-day with low space sets in, and you become a sort of data survivor, making drastic decisions because you have no choice.

I’ve been operating in this mode for months now, because I have a phenomenally souped-up laptop with an Achilles-heel hard drive of 250GB. Worse yet, I can’t upgrade.

So here are my best tips on how to live another day when you have no options but jettison ballast…

WinDirStat: The Last Honest App

You probably already have this little program, WinDirStat, with the world’s ugliest icon, and Pac-Man progress bars. And it is the only tool I trust to tell me the reality of my hard disk.

An icon only a developer could love.

Run it on the offending drive, and after a reasonable time, it will produce another ugly graphic which shows you a “colorful” tree map of your disk usage. You will spot quickly the worst-offending files because they will glare at you in alarmingly discordant color swatches, which hurt the eyes; and they should, because those files are hurting your drive.

But here’s the rub, depending on your company’s policies and Windows restrictions, there are some sources of pain that cannot be remedied. One glaring example is pagefile.sys, which is used for virtual memory.

Guess which file is pagefile.sys?

That said, WinDirStat shows you where to prune. You can see your disk contents arranged in blocks, and when you click on one, it will show you where in the labyrinth of folders the file exists.

There are other good options, but I recommend this utility to gain the necessary oversight of your starving hard drive’s vital signs.

When you’ve decided you can live without Outlook…

Let’s face it, without more space, your PC has no outlook. And when push comes to shove, you may be looking at that OST file under your Outlook directory with eyes like carving knives.

C:\Users\<your username>\AppData\Local\Microsoft\Outlook\HumongousFile.ost

Because my company uses Office365, I am able to use the web client. The web client isn’t great (as of writing this, I can’t set up an email signature in it), but like I said, I have to free up space, and having every email I’ve ever sent or received sitting on my hard drive is not an option.

So I learned to use the web client, and it’s actually not too bad. The new web integrations for office make previewing a file much better, and you can also open the file in the local app (Word, Excel, PowerPoint, etc) with a single click from the preview.

Once you decide to eliminate the OST file upon which Outlook relies, you need to be aware of one trick Outlook has up its sleeve…

“Clever girl…”

Sure, go ahead, delete the file. Boom! Gigs back. Hooray! Much rejoicing! But then, next day, it’s back, and you’re in worse shape now because you celebrated by installing a bunch of stupid applications, didn’t you?

Still not dead.

The secret of OST’s regenerative ability is Lync (which is now called Skype for Business). There’s an option in Skype settings which will cause the OST to be recreated if it is found missing.

Disable that like so:

Exit out of Skype, delete the OST, restart Skype. OST stays gone now. Hopefully. [plot left open for sequel]

Hibernation is for bears.

Hibernation mode is a handy Windows feature that allows your computer to go into a powered-off state and then, when the season is right, awake with all your programs running just as before, and then maybe tear open a few campers’ SUVs.

How does it do this? By creating a file as big as your PC’s total RAM, which is intolerably large!

But you can shut it off and get all of that bear-sized space back. Run CMD as Administrator, and execute the following command:

powercfg.exe /hibernate off
I can almost fit in this hard disk.

Not all of OneDrive needs to be synchronized.

The beauty of cloud storage is that it has so much space to offer. I use OneDrive, and I find it practical to have a couple folders kept in sync for convenience.

Make sure that you aren’t syncing the entirety of your OneDrive folder. You have to go into OneDrive settings and specify what folders it should keep in sync. Otherwise, you have gained zero advantage to using cloud storage.

Better to be embarrassed by poverty than by riches.

Additionally, the folders you do keep synced you have to practice good data hygiene on. So everything I discuss in the following tip is relevant to your cloud synced folder…

Now, about your Downloads folder…

I know, you’re going to tell me you’ve already deleted the big stuff out of here. But wait a minute, here’s an interesting theory for you to reflect upon: the Downloads folder is a window into your aptitude or lack thereof for managing your hard drive.

Psychobabble, you say? Go to the folder. Take a look at the contents you decided not to delete and ask yourself: why are they still there?

If you have some application install files that are hard to find, put them up on OneDrive or some other cloud storage. If the install file isn’t hard to find and you can with minimal effort download it again, delete the stupid thing. Don’t hoard your data!

I had 4 copies of Filezilla when I last checked. Is Filezilla going to vanish from the Internet? Is it going to become a SaaS (perpetually paid for program) like the Adobe Scrooge Suite?

No it’s not. Delete it.

Those documents you downloaded from Gmail. Are they yours? Then put them in a different folder. Are they from someone else? Then delete them, because you can very easily find the email they were attached to and redownload them IF NEEDED.

Survival is about need.

Need. Survival is about need. There is no room for convenience. Either move the document to a better place, or delete it.

Data hygiene starts with the Downloads folder, the way health starts with diet. The Downloads folder is like your mouth. Files are like food. If you need the food, store it; if you don’t…well, you know what a body does.

Remove old versions of programs, if you can.

In most cases, you don’t need old versions of programs. Sometimes you do though.

Generally, this problem is most acute for developers because we don’t always have the option to upgrade our projects to a later version of an IDE, and IDEs are the fattest of resource hogs in the application pen. But if you’re still hanging onto VS2010 and you haven’t used it in the last 6 months, it’s time to make the survivor’s choice.

Here’s a more challenging scenario: VS 2015 and VS 2017. In some cases I’ve seen, there are projects I simply cannot port, usually this happens on team projects that are archived in TFS.

If you can’t remove it, don’t. But here’s what I can do. I can remove those VS2015 projects, because…they’re on TFS. Which brings me to the next tip…

Delete projects that are *safely* under source control.

Man, the whole point of Git, TFS, or (heaven help you) SVN, is that you transfer authority of the files to a central repository. It’s safe! The reason it exists is so you don’t have to data hoard versions of your code. Including the latest versions. If you’re not actively working on the project, and the latest changes are checked in, simply delete the local copy.

Yes, it’s a hassle to pull it back down, but this goes back to data hygiene. It’s also a hassle to make your bed in the morning, and shave, and keep your car passenger worthy.

Strangely enough, it’s an upside to not have a choice. You need space. You need to incur some inconveniences to get it. Take a chainsaw to your file tree…for the good of the tree!

Some trees deserve the chainsaw.

Android Studio hates your hard disk.

AVD Block Party

I like Android Studio, but Android Studio hates my PC.

Besides being a massive pile of bytes, Android Studio spawns virtual device images for emulation purposes, and a single instance of an Android device image takes up an ever more expansive amount of space.

AVD Block Party Location. Be there, and be square-ish.

Apparently, Google’s Android team has found it most conducive to development to appropriate every byte a device could possibly utilize in order to provide you an emulated version.

Now, I won’t be as drastic as some developers who flat-out refuse to use the emulator; I don’t think that’s reasonable (although it is arguably better practice to use a real device). I also understand that some images you may have configured specially because you were building for an uncommon device. So take notes, or even screenshots, of those configuration settings, and remove the images until you need them again.

Nervous Database is logging itself.

If you work with Microsoft SQL or MySQL, be aware that both of them will produce snapshots of their databases under certain circumstances, such as an application crash.

These files are generally small, but could grow into a big problem. And you don’t need them.

For Microsoft SQL, the files are named SqlDump<some number>.mdmp

For MySQL, the files are named mysql-bin.<some number>


Some programs will save snapshots of files in progress when they crash. This is not a uniform practice, but I’ve spotted where most apps put their crash dumps: a folder called “CrashDumps” in the local app data folder.


Presuming you haven’t just experienced a program crash, you can clean this folder out. Depending on which program crashed, it could be claiming a sizable chunk of your drive.

The bigger they are the larger they crash.

Windows tries to help.

I’m assuming, at this point, that you have muscle memory for launching the Disk Cleanup utility. And hopefully you have admin privileges over your computer to get the bonus cache of junk files. This is usually slim pickings, but just in case, you should know about Windows’ ‘Disk Cleanup’ app, and also set up your Disk Defragmenter to run regularly.

We take what we can get.

WinSxS (Windows 10)

WinSxS folder on Windows 10

Windows SxS (Side-by-Side) is an unhappy compromise of an OS caught between 32 and 64 bits, and a library of legacy to boot.

On Windows 10, you will quickly notice this teeming colony of files, small but many.

There are lots of tips for slimming down this multi-stomached digestive system, but the quick one is to run the following from the command line (with Admin privileges), which will remove cached components that have been updated to newer versions…

Dism.exe /online /Cleanup-Image /StartComponentCleanup 

Edge Cases: Other Strange Space Eaters

Here’s where I alienate the general audience and report some strange edge cases that I have had to deal with as a disk space survivalist.

The most unnerving one is this: at one point in my efforts to save space, I came across wave files that were massive. Surprised to find them, I listened to one, and honestly, it sounded like me using my computer. It was ambient, like a room, with occasional sounds of life. After a few seconds I was sufficiently creeped out, and I stopped listening.

Maybe it’s a Cortana thing, or some other mic resource gone awry. All I really know is that my computer didn’t need gigantic WAV files in order to improve my user experience.

In the rare case that you also have these files, you know that they reside in the Windows\Temp folder.

To handle them, I simply wrote a 1 line bat file to delete them and scheduled it to run on logon and every 2 hours.

del /f "C:\Windows\Temp\*.wav"

Easy M4A/AIFF/WMA/ETC to Mp3 conversion with FFmpeg

Making a simple mp3 converter for various audio files

FFmpeg was created as a Linux tool to facilitate movie format decoding. It was later wrapped into the mplayer project, where it remained a stand-alone exe used by the mplayer libraries.

While MPlayer is no more, FFmpeg continues to find its way into countless media production applications, since it does such a good job and offers so many conversion options. If you have a media conversion tool, there’s a good chance FFmpeg is sitting somewhere in the application’s bin folder.

Best of all, it is still available as a free download, and I wouldn’t recommend any other converter for the reason stated above; most converters use FFmpeg.

A lot of WordPress sites cannot accept the audio formats AIFF and M4A, which are sometimes produced by media programs. So FFmpeg can help you convert them to the smaller, ultra-portable MP3 format.

The FFmpeg exe handles a sea of arguments, but for simplicity’s sake I’m just doing the obvious in the following one-liner bat file. It takes an audio file and converts it to MP3 (128k bitrate), using for output the same file name as passed in, with the .mp3 extension instead of the source extension. It will create the output file in the same location as the input. So once you have this bat file created, you can just drag audio files onto it and see the output file created in the same location.

  • Take the following, and edit the “LOCATION_OF_YOUR_FFMPEG_EXE” part to be the path of where the ffmpeg exe ended up after your download.
  • Save the file to your desktop, or somewhere convenient for dropping files.
  • Drag an audio file from any folder onto the bat file and watch for an MP3 version of it to be created in that same folder.
C:\LOCATION_OF_YOUR_FFMPEG_EXE\ffmpeg.exe -i "%~1" -acodec libmp3lame -ab 128k "%~dpn1.mp3"

Developing and distributing an Alexa Skill (Part 2)

This guide continues from Part 1, where we reviewed how to set up the app.

Now we’re ready to code the Lambda function that will actually perform the Skill’s dialogue with a user.

The Lambda Function code editor is pretty decent. With it, you can add and edit files easily, and they will be stored server-side.

I like to use Vivaldi browser, but with it, I was seeing this error message a lot:

“Failed to write ‘index.js’. Please try again.”

When I switched to Firefox, I was not seeing this. I have also seen others report this issue online. One thing I can tell you is that I currently have very little space on my hard drive, and I think that could be a factor, because when I first started developing using Vivaldi, it was not an issue.

Anyways, to access your code, you need to click on the Lambda Function name in the designer view:

Click on the function name in the designer to reveal the code in the code editor (below the designer)

Then scroll down and see the code:

Let’s make some changes to the code above, and then we’ll test it. I have edited it to look like the following:

/* eslint-disable func-names */
/* eslint quote-props: ["error", "consistent"] */
'use strict';

const Alexa = require('alexa-sdk');

//TODO: The items below this comment need your attention.
//Replace with your app ID (OPTIONAL). You can find this value at the top of your skill's page.
const APP_ID = undefined;
const SKILL_NAME = 'How Awesome';
const HELP_MESSAGE = 'To rate how awesome something is, you can say "Sandwiches are a 2 out of 10."';
const PROMPT = "Tell me what's awesome?";
const HELP_REPROMPT = "I couldn't understand what you said. " + HELP_MESSAGE + " " + PROMPT;

const handlers = {
    'LaunchRequest': function () {
        this.response.speak(PROMPT).listen(HELP_REPROMPT);
        this.emit(':responseReady');
    },
    'HowAwesome': function () {
        // Log the incoming slots so we can inspect them in the test output.
        console.log(this.event.request.intent.slots);
        this.response.speak("How Awesome");
        this.emit(':responseReady');
    },
    'AMAZON.HelpIntent': function () {
        this.response.speak(HELP_MESSAGE).listen(HELP_REPROMPT);
        this.emit(':responseReady');
    },
    'AMAZON.CancelIntent': function () {
        this.response.speak('Goodbye!');
        this.emit(':responseReady');
    },
    'AMAZON.StopIntent': function () {
        this.response.speak('Goodbye!');
        this.emit(':responseReady');
    },
    'SessionEndedRequest': function () {
        this.emit(':responseReady');
    },
};

exports.handler = function (event, context, callback) {
    const alexa = Alexa.handler(event, context, callback);
    alexa.APP_ID = APP_ID;
    alexa.registerHandlers(handlers);
    alexa.execute();
};

The above code can be tested after we configure a test event. To do so, choose Configure test events from the “Select a test event” drop down:

In the pop up window, we choose Create new test event, and select template Amazon Alexa Intent GetNewFact, then we’ll rename it, to HowAwesomeTest1:

Before you hit create, modify the portion of the code where the request object is defined to be the following:

"request": {
    "type": "IntentRequest",
    "requestId": "amzn1.echo-api.request.1234",
    "timestamp": "2016-10-27T21:06:28Z",
    "locale": "en-US",
    "intent": {
        "name": "HowAwesome",
        "slots": {
            "Thing": { "name": "Thing", "value": "sandwiches" },
            "OneOutOfTen": { "name": "OneOutOfTen", "value": 10 }
        }
    }
}

We have changed the name of the intent to our HowAwesome Intent, and provided data in the slots object.

The slot fields follow the convention shown above: the name of the slot as a key, followed by an object with a redundant name property and a value. The ID will also appear in here if applied in the Skill’s slot configuration.

Now click the Create button and you will see the drop down populates with our HowAwesomeTest1. We want to Save the function, and then test it.

When you run the test, you should see results like the following:

Results will show in a green box if successful, or light red if failed. The Details will show the speech output JSON and, lower down, the log output from console.log. So for the result JSON:

{
    "version": "1.0",
    "response": {
        "shouldEndSession": true,
        "outputSpeech": {
            "type": "SSML",
            "ssml": "<speak> How Awesome </speak>"
        }
    },
    "sessionAttributes": {},
    "userAgent": "ask-nodejs/1.0.25 Node/v6.10.3"
}

And for the log output:

START RequestId: 06bd4a46-0f8b-11e9-9766-8f063ff0f177 Version: $LATEST
2019-01-03T19:09:00.465Z 06bd4a46-0f8b-11e9-9766-8f063ff0f177 Warning: Application ID is not set
2019-01-03T19:09:00.484Z 06bd4a46-0f8b-11e9-9766-8f063ff0f177 { Thing: { name: 'Thing', value: 'sandwiches' },
OneOutOfTen: { name: 'OneOutOfTen', value: 10 } }
END RequestId: 06bd4a46-0f8b-11e9-9766-8f063ff0f177
REPORT RequestId: 06bd4a46-0f8b-11e9-9766-8f063ff0f177 Duration: 109.37 ms Billed Duration: 200 ms Memory Size: 128 MB Max Memory Used: 32 MB

You can see that we have outputted the JSON of the slots field to the logs. This field comes into play when the platform calls the Lambda function from the Alexa Skill after matching an Utterance, but here we are explicitly providing it.

Let’s change the HowAwesome Intent to something more…functional. Try this:

'HowAwesome': function () {
    var slots = this.event.request.intent.slots;
    var thing, rating;
    if (slots && slots.Thing && slots.Thing.value) {
        thing = slots.Thing.value;
    }
    if (slots.OneOutOfTen && slots.OneOutOfTen.value) {
        rating = slots.OneOutOfTen.value;
        if (rating < 1 || rating > 10) {
            this.response.speak("You can only rate something as a 1 out of 10 for numeric awesomeness.");
        } else {
            this.response.speak("You have rated " + thing + " as " + rating + " out of 10 for awesomeness.");
        }
    } else if (slots.AwesomeType && slots.AwesomeType.value) {
        var atype = slots.AwesomeType.value;
        // The numeric rating is the ID assigned to the matched value in the
        // custom slot type, delivered via entity resolution:
        var aid = slots.AwesomeType.resolutions
            .resolutionsPerAuthority[0].values[0].value.id;
        this.response.speak("You have rated " + thing + " as " + atype + " awesome, " + aid + " out of 10.");
    }
    this.emit(':responseReady');
}

This code will receive a value from either of our slots, OneOutOfTen or AwesomeType. The response from Alexa will differ depending on which slot is populated, which is a matter of which Utterance was matched.

Let’s create a new test for our new Intent code. Choose the Configure test events option from the drop down again, and using the first test we created as a template, change the following part (the slot values):

Create the new test, save the Lambda Function, and run the new test. You should see the following JSON output:

"version": "1.0",
"response": {
"shouldEndSession": true,
"outputSpeech": {
"type": "SSML",
"ssml": " You have rated sandwiches as pretty awesome, 6 out of 10.
"sessionAttributes": {},
"userAgent": "ask-nodejs/1.0.25 Node/v6.10.3"

So you can see that it’s going to the second logic branch for the AwesomeType and preparing speech that contains both the type of awesomeness and the number it maps to.

Next we’ll test our Lambda Function using the Alexa Skill itself in the Alexa Developer Console. (coming soon…)


Developing and distributing an Alexa Skill (Part 1)

Despite my protests, my wife went ahead and bought an Amazon Echo for the house. I didn’t like giving Amazon access to so much personal data, and still don’t, but now, having used the device for a month or so, it has become helpful.

After you get over the unsavory way in which Alexa tries to sell you Amazon stuff, and the frustration of a stuttered or hesitant sentence not being recognized, having a live mic that can process inquiries in real time is undoubtedly useful, and it’s also fun (my son likes asking it what a blue whale sounds like).

Alexa is at its best when used simply. Single queries, not conversations, quips, not chapters, reading, not writing.

Amazon has made the development pretty easy, but unfortunately, it is fragmented across multiple offerings, which was the main issue I had when approaching the platform.

The utility I wanted early on was the ability to ask for verses from the New American Bible, and not the King James version which is missing several books. You can take a look at the outcome here.

And what follows is the path I took to develop my Alexa Skill, which can look up a verse in the New American Bible (Revised Edition)…

Creating a new skill

First, go to the Amazon Developer site and create an account.

Once logged-in, visit the Developer Console. There you will see the Alexa option in the navbar, and choose Alexa Skills Kit.


Under the Alexa Skills listing, click the Create Skill button:


In the next menu you can choose a sample app to start with. I would definitely recommend choosing the Fact Skill, as it has the most minimal point of entry. As far as I can tell, the biggest difference between the samples, besides the starter source code, is the Intent settings that are also created.


This will land you on the Skill’s dashboard page:


On the left-side navigation, you will see the elements that compose your skill from a metadata standpoint. All of these boil down to the JSON that appears in the JSON Editor, which comes in handy when you need to edit some of the Utterances, or if you want to quickly copy a Skill’s settings into another skill.

Also in the sidebar (not shown in screen above), is the Endpoint configuration link. This is the URL where Amazon’s servers will look for your app’s code to handle the launch request.

Where does the skill app code go?

The skill has to point to a URL which is hosted either by AWS or on your own server. That URL needs to handle specific requests.

I used the AWS Lambda function option, which the Amazon developer site encourages, for a variety of reasons; most notably, the free tier offers a lot of usage at no cost. However, you can point it to a different URL where your server is hosted, or to an endpoint for local development. I explored this, but ended up going with the Lambda option because it removed some of the unknowns, and I wanted to get up and running with the least amount of effort.

System and custom Intents

When you develop a skill, you will need to handle a few system level Intents. These are like system events that occur depending on how your app is used. They can get pretty advanced, such as doing media playback, but that’s out of scope for us here.

The Fact Skill creates most of the Intents you need to handle, and includes one custom Intent to drive the core function of the app: to speak a random fact.

In addition to your own custom Intents, the Intents you will need to handle in your app are

  • AMAZON.CancelIntent
  • AMAZON.HelpIntent
  • AMAZON.StopIntent
  • LaunchRequest
  • SessionEndedRequest

Notice that the last two listed do not appear in the sidebar of the out-of-the-box Intents for the Fact Skill template. But they will need to be handled within the app.
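As a sketch in the alexa-sdk (v1 / ask-nodejs) style that the Fact Skill template uses, these arrive as entries in a handlers map; the speech strings below are my own placeholders, not the template’s:

```javascript
// Minimal handlers map covering the request types and system Intents
// listed above. Each handler builds a response and emits it.
var handlers = {
    'LaunchRequest': function () {
        this.response.speak("Welcome. Tell me how awesome something is.")
            .listen("For example: sandwiches are seven out of ten.");
        this.emit(':responseReady');
    },
    'SessionEndedRequest': function () {
        // Nothing to say; just close out the session cleanly.
        this.emit(':responseReady');
    },
    'AMAZON.HelpIntent': function () {
        this.response.speak("Name a thing and rate its awesomeness out of ten.")
            .listen("Go ahead.");
        this.emit(':responseReady');
    },
    'AMAZON.StopIntent': function () {
        this.response.speak("Goodbye.");
        this.emit(':responseReady');
    },
    'AMAZON.CancelIntent': function () {
        // Reuse the Stop behavior for Cancel.
        this.emit('AMAZON.StopIntent');
    }
};
```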

Invocation of the skill

The invocation is how Alexa will launch your app. This is not so straightforward.

Alexa will launch your app given certain launch phrases such as “open”, “ask”, “start”, “launch”, etc., and the Invocation Name is the phrase that maps to your Skill. So if your skill is named “My Awesome Skill”, your invocation name could be simply “my awesome skill”, or you can make it different from the Skill name, such as “the awesomest skill”, whereby the full invocation would be

Alexa, open the awesomest skill.

Skill Utterances and Slots

When you want to try your skill, you need to launch it first. In the case of the out-of-the-box Fact Skill, the app doesn’t need any interaction; it simply blurts out one of a bunch of space facts.

When you develop, you should prompt the user for input when they launch the app. This input will then be matched against the Utterances you’ve added to the Skill. This will then be fed into the Intent that the matched Utterance maps to, and will be available as an event “slot” parameter.
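Conceptually, the matched Utterance’s slot values arrive inside the request event your code receives. A minimal sketch, with example values and the slot names defined in this guide:

```javascript
// Sketch of the relevant portion of an incoming request event, assuming
// the Thing and OneOutOfTen slots defined above (values are examples).
var exampleEvent = {
    request: {
        type: "IntentRequest",
        intent: {
            name: "HowAwesome",
            slots: {
                Thing: { name: "Thing", value: "sandwiches" },
                OneOutOfTen: { name: "OneOutOfTen", value: "7" }
            }
        }
    }
};

// An Intent handler reads the matched values out of the slots object:
var thing = exampleEvent.request.intent.slots.Thing.value;
var rating = exampleEvent.request.intent.slots.OneOutOfTen.value;
```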

If you click on the GetNewFactIntent, you are taken to a page where you can both rename the Intent (or you can just delete it and create a new one), and then populate the Utterances for it.


In our case, we are going to create a couple ways to state how awesome something is.

As you type in the new Sample Utterance field, you can use the curly brackets to create a slot, which is the variable that will be populated by the user when they interact with your skill. The utterance that gets matched and populated depends on the formation of the spoken sentence.

Let’s create a one to ten utterance with the following slot

{OneOutOfTen} out of ten

I set the slot type to AMAZON.NUMBER so that it will be populated with an integer value.

I’m gonna make two ways of assessing the awesomeness of a thing: the first is a number from 1 to 10 using the AMAZON.NUMBER slot type; the second is a custom slot type, which I will call AwesomeType, that lets me make a list of phrases mapping to the numbers 1 to 10. Finally, I will create a slot for the thing being assessed, using the AMAZON.SearchQuery slot type, which uses Alexa’s voice recognition to resolve a string phrase.

First, to create the custom slot type, click the add button next to the Slot Types menu item in the left sidebar.

Fill in a name for the type (no spaces):


Next you can add values in one of two ways:

  1. by entering them one at a time, or
  2. by using the bulk import, which receives a CSV file.

I prefer the second option, whereby the developer can load in a tab-delimited file (without headers). The first column is the value which Alexa will match using speech recognition; the second is the ID, which is what will be sent to your app’s function (the Name is also sent in); and any following columns are synonyms, which Alexa will alternately map to the same ID (or to the value if no ID is provided). So the file would look like this:

Value1  ID1     Synonym1        Synonym2        (etc)
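For instance, hypothetical rows for my AwesomeType slot type (tab-separated; these exact phrases and IDs are illustrative, not my full list) could be:

```
pretty awesome	6	quite awesome	fairly awesome
totally awesome	10	completely awesome	supremely awesome
not awesome	1	hardly awesome
```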

With the field populated, my slot type looks like this:


So Alexa will look for one of these values to populate the AwesomeType of one of my Utterances, which are as follows:



  • {Thing} will use Amazon speech recognition to deliver a string phrase
  • {OneOutOfTen} will resolve to an integer value
  • {AwesomeType} will resolve to one of my field options which will include an ID that I can parse into an integer

So I have one Custom Intent and 5 built-in ones, one of which is optional: AMAZON.FallbackIntent, which can be used in the event that Alexa cannot understand what you said. It’s probably a good idea to handle this with at least a “Sorry, I couldn’t understand that kind of awesomeness” message.

Make sure to save the model using the Save Model button at the top of the page.

Creating an AWS Lambda function

Save the model, and then hop over to AWS and sign up for an account. Once you’ve done that, you can create a Lambda function which will be used to host your app.

Under the AWS Console, click on the services menu on the top navbar and select Lambda from the drop down menu:


On the listing for functions, click Create Function. Then on the Create Function page, select Blueprints, and then enter “alexa” in the search filter and choose alexa-skill-kit-sdk-factskill from the results, and click Configure:


On the next screen you will enter the Basic Information of the Lambda: the name of the Lambda function (no spaces), and also a Role (which holds the permissions under which the function will be called). I just clicked through, which looks like this:


Allow the IAM Role, and then you will return to the Basic Information screen:


Syncing the Lambda function with the Alexa Skill

After this, you will be taken to the Lambda editing page, where you will see printed at the top, the ARN code of the Lambda function. You will need to copy this and enter it into the Endpoint screen of your Alexa Skill later:


On this edit page, you will need to add an Alexa Skills Kit trigger from the Add Triggers menu. When you do this, you will be prompted to configure the trigger using the Skill ID of your Alexa Skill:


So you need to have both the Alexa Developer Console and the AWS Lambda Editor open to do this next part.

  1. In the Alexa Developer Console, click the Endpoint option in the leftside navigation, and copy the Skill ID you see listed and
  2. Paste it in the AWS Lambda Editor‘s trigger configuration field, and click Add there. Then Save it using the Save button near the ARN code at the top of the page.
  3. In the AWS Lambda Editor, copy the ARN code that appears at the top of the page, and
  4. Paste it in the Alexa Developer Console‘s Endpoint configuration form’s Default Region field, then save it using the Save Endpoint button that appears at the top of the page. You should see a green Skill Manifest Saved Successfully toast message appear.

Note on Default Region: I’ve been using East 1 (N. Virginia). I don’t know whether choosing another region has any impact, but I have read some users complaining about region issues, possibly between resources; I’m not sure, so I stuck to East 1.

Back to the AWS Lambda Editor, you now need to code your app…!

I will tackle this in Part 2.

Every issue, quirk, and glitch I ran into while developing a SharePoint Add-In

My job can be heavy with SharePoint Online, and I do like to tinker around on it using the REST API and Script Editor web parts; you can do cool stuff just with those.

But as soon as you start speccing out workflows, it’s probably time to elevate to Apps or Add-Ins.

I can’t figure out what SharePoint wants to call them, but when you go to develop it in Visual Studio, you’ll be picking the SharePoint add-in project:


I set out to create a polling add-in because I already had it working as a REST API script web part, so I figured, why not package the thing?

But the simplicity with which you can build out your app via Visual Studio’s bountiful UI is almost wholly negated by the weird quirks you will encounter as you develop.

I will try to catalog them here…

Extra, un-asked for Features

The first issue I ran into was that new Feature nodes would be added depending on what I deleted from and added to the project. Visual Studio would seem to lose track of the Feature1 it created by default, and begin adding new items to a second or even third Feature node.

When that happens, delete the extra feature nodes, and take a look at Feature1 to be sure it is including all the relevant items of your project (which is probably all of them).


Retraction, Retraction, what’s your function?

When you re-deploy your add-in, it sometimes needs to retract the one that’s already there, and many issues arise.

One is, retraction can dodder on indefinitely, causing you to need to cancel the operation (the option should appear under the Build menu). But then you will need to rerun retraction, which is fine, except when SharePoint already thinks the add-in is retracted.

A couple times I’ve had to go onto the deployment site, and delete the add-in, but even when cleared from the recycle bin and second stage recycle bin, SharePoint still believes it’s there!

One solution I found is to tick up the version (after you’re certain the add-in is removed); this way, SharePoint will not insist the app is already there with the same version, because you changed the version:


(h/t this thread)

Confused schema

It’s not enough to have lists with separate schema settings: if their Type field is the same, SharePoint may confuse the schemas, and you will see two lists with the same field set, which is jarring, really. Even if you completely delete the second list and recreate it, remapping every association it had in the project, SharePoint may still find its schema confusing.

One user found the solution was to set the type of the second list to 1000, ticking up from there as you add more lists with potentially confusing schemas.

(h/t this thread)
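As a sketch of where that fix lands (the list name and other attributes here are hypothetical, not from my project), the second list’s elements.xml carries the bumped type number, and its ListInstance must reference the same number:

```xml
<!-- Elements.xml of the second list definition: Type bumped from the
     default to 1000 so SharePoint won't conflate the two schemas. -->
<ListTemplate
    Name="PollResponses"
    Type="1000"
    BaseType="0"
    DisplayName="PollResponses"
    Description="Second list, with a distinct Type number" />

<!-- Matching list instance elsewhere in the project:
     <ListInstance TemplateType="1000" ... /> -->
```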

List Field Default Values

It’s not that there’s an issue with setting default values for your list fields; it’s just that when the UI is as comprehensive as it is in Visual Studio, it can be perplexing to need to do something at the code level, especially setting a default value of a field, which is standard procedure.

To do this, you have to edit the schema of your list, and add a sub-element <Default> within the <Field> element:
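A sketch of what that looks like, with a hypothetical field name, ID, and default value:

```xml
<!-- Hypothetical field in the list's Schema.xml; the <Default> child
     element supplies the default value for new items. -->
<Field ID="{6f0a5e0e-0000-0000-0000-000000000001}"
       Name="PollStatus"
       DisplayName="Status"
       Type="Text">
  <Default>Open</Default>
</Field>
```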


(h/t to this thread)

Hiding list fields from the New, Edit, and View forms

Not every field of your list should be shown on every form, unless you want to give the user that sense of vague formlessness so common in modern art today. But to hide a field, you have to specify a series of values in the list’s Schema.xml.

Go to the <Fields> element, find the fields within that you want to show or hide, and add the following (True/False) properties:

  • ShowInNewForm
  • ShowInEditForm
  • ShowInViewForms
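As a sketch with a hypothetical field, hiding it from the New and Edit forms while keeping it on the View forms:

```xml
<!-- Hypothetical field: hidden when creating and editing items,
     visible when viewing them. -->
<Field ID="{6f0a5e0e-0000-0000-0000-000000000002}"
       Name="PollSource"
       DisplayName="Poll Source"
       Type="Text"
       ShowInNewForm="FALSE"
       ShowInEditForm="FALSE"
       ShowInViewForms="TRUE" />
```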



Missing page context in your Add-In

Alas, there are many answers to this on various developer forums, and none of them work.

Basically, your add-in is going to situate itself in an iframe that loads content which is outside of the site it gets added to, so it’s not really aware of some key globals such as _spPageContextInfo, or SP.ClientContext.get_current(), and you can’t add scripts to fix the problem because ultimately…it’s in a separate place.

NOTE: If you need to talk to info that resides on the destination site, you’re going to need to use SharePoint’s cross-domain ajax functionality, which was beyond my purpose, but you can read about it here.

If you’re simply looking to use the SharePoint REST API within your web part’s quarantined site/section/ward, you need to pass that URL in via the web part parameters. SharePoint has a bunch of dynamic parameters called tokens, listed in the second table (Tokens that can be used inside a URL) here, but you have to add them yourself.


Once you add them, you can build the REST API calls off of them, especially the {AppWebUrl} parameter. I suppose you could do some text processing on the document.URL string to get this value, but that seems risky, and SharePoint ought to tell you where it put your add-in.
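A sketch of that approach, assuming you appended something like ?SPAppWebUrl={AppWebUrl} to the web part’s URL (the parameter name and the sample values below are my own, not SharePoint defaults):

```javascript
// Read a token value that SharePoint substituted into the page's query
// string, rather than text-processing document.URL.
function getTokenParam(search, name) {
    var pairs = search.replace(/^\?/, "").split("&");
    for (var i = 0; i < pairs.length; i++) {
        var kv = pairs[i].split("=");
        if (decodeURIComponent(kv[0]) === name) {
            return decodeURIComponent(kv[1] || "");
        }
    }
    return null;
}

// Hypothetical query string, as it would arrive URL-encoded
// (in the page you would pass window.location.search instead):
var search = "?SPAppWebUrl=https%3A%2F%2Fcontoso-apps.sharepoint.com%2FPollApp";
var appWebUrl = getTokenParam(search, "SPAppWebUrl");

// Build REST calls off the resolved app web URL:
var listEndpoint = appWebUrl + "/_api/web/lists/getbytitle('Polls')/items";
```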

If you’re interested in making calls to the site you’re adding to, you’ll want to make use of the {HostUrl} parameter. This may require some permissions setup in the AppManifest, but that’s further than I went.

Showing a list on your Add-In’s default page

My add-in makes use of a couple of lists, one of which I want to show on the default page, which I figure should be the sort of administrative front-end of the app, providing the user with some web parts to perform basic content addition. But don’t expect SharePoint to just give one to you.

You have to edit the elements.xml of the Pages node in the project, and add the following within the Default.aspx page listing:

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="">
  <Module Name="Pages">
    <File Path="Pages\Default.aspx" Url="Pages/Default.aspx" ReplaceContent="TRUE">
      <AllUsersWebPart WebPartZoneID="HomePage1" WebPartOrder="1">
            <webPart xmlns="">
                <type name="Microsoft.SharePoint.WebPartPages.XsltListViewWebPart, Microsoft.SharePoint, Version=, Culture=neutral, PublicKeyToken=71e9bce111e9429c" />
                  <property name="ListUrl">Lists/Polls</property>
                  <property name="IsIncluded">True</property>
                  <property name="NoDefaultStyle">True</property>
                  <property name="Title">Polls</property>
                  <property name="PageType">PAGE_NORMALVIEW</property>
                  <property name="Default">False</property>
                  <property name="ViewContentTypeId">0x</property>

Now, in the Default.aspx page itself, you can add markup that refers to this element data you just added:


Wait, what’s wrong? You mean you want to see more than just the title field of your list in that web part?

I’m not 100% on this part, but this change worked for me:

Edit the <ViewFields> tag in the Schema.xml of your list, where you find a <View> element with BaseViewID="0", Type="HTML"…


Taking Add-In to the Market

I wanted to share, and possibly earn some coin off, my frustration in developing this supposedly simplistic add-in, but in doing so, I hit the following weirdness:

You’ve got to select at least one Supported Locale in the AppManifest, which was fine with me; I just wanted English to start. But if you just use the UI, it won’t add the setting properly. You need to go into the AppManifest.xml code-view and edit the XML so that the locale is in the xx-XX form (e.g. en-US).


Beyond that, the screenshots simply must be 1366px × 768px, naturally! That seems a little unfair to little Add-Ins like mine, which I had to pad out with blank space because, seriously, it must be this size.

JSONP with Mendix REST Services

Mendix is an exciting platform because it simplifies something that tends to get overly-complicated: web-facing, record-based systems.

Because of Mendix’s platform ideology of rapid development, it isolates the developer from the guts of the java code it generates and compiles.

The problem is that many of the apps (modules) available from the App Store are not as flexible or feature rich as they might be, and some have the classic problem of backwards compatibility with a rapid-release IDE (in this case, the Mendix Business Modeler).

When Mendix started offering REST services, I took interest because it’s one of the better use cases for the platform, and I found that the REST Service module delivered both consume and publish services, and the setup was intuitive and worked well…with one exception: No JSONP support.

JSONP is a very simple modification to JSON output: it wraps the JSON object of the response in a function name that you pass to it in a query parameter, usually named callback.

So we go from

[{"field1":"stringvalue", "field2":100}]


passedCallbackFunctionName([{"field1":"stringvalue", "field2":100}]);

Then, on the receiving side, the function named in the query parameters is either automatically invoked by libraries like jQuery (because it detects a parameter named callback in the request url), or manually by calling eval() on the response body.
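In effect, the server-side change we’re after (shown here as a JavaScript sketch rather than the actual java edits described below) is just this transformation:

```javascript
// Sketch of the JSONP transformation: if the request carried a callback
// parameter, wrap the JSON body in a call to that function; otherwise
// return plain JSON.
function toJsonp(jsonBody, callbackName) {
    if (!callbackName) {
        return jsonBody;
    }
    return callbackName + "(" + jsonBody + ");";
}

var wrapped = toJsonp('[{"field1":"stringvalue", "field2":100}]',
                      "passedCallbackFunctionName");
```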

This can be fixed, but requires an adventure into the java code layer that Mendix generally keeps out of sight.

To setup REST in Mendix, first download the Rest Services module from the App Store:


With this downloaded, you will see the following content in the tree view of the module in the Project Explorer:


You want to create a microflow to handle the initialization of your REST microflow endpoints, and set that in the Project Settings After startup property:



My Initialization microflow looks like this:


The first activity takes no parameters. The second and third need settings similar to the following:


So in my module named Search, I am exposing a microflow named “incidents”: giving it a wildcard for the security role, which means anyone can access it; being unimaginative with my description; and specifying the RestServices.HttpMethod.GET enum value for the HttpMethod.

I do this again for another endpoint that serves up records from an entity table of requests.

I’m not going to go into the mechanics of the microflows I’m publishing, except to note that they produce JSON based on the fields of a non-persistable entity, which I populate via a persistable entity at the time of endpoint invocation. For this simple ticket system app, my domain model looks like this:


The search args will be automatically populated, based on the parameters in the query string.

For example, when you hit the running endpoint and specify contenttype=json, you get this result:



So the service publishes under the path


…But as you can see, this is not JSONP.


To have Mendix Rest Services offer JSONP support, you only need to make two tiny edits to two of the java source files that are added to your Mendix project when you add the Rest Services module.

To find this, go to the Project menu, and choose Show Project Directory in Explorer


And navigate to

<Your Project Path>\javasource\restservices


Both files are under the publish folder.

Add the following to the first file:


And in the second file, here are the changes:


Here’s a zip of all the changes: Link

When you have your service compiling and running locally, you can deploy it to the Mendix Free App Cloud for further testing. NOTE: the service will go to sleep after a period without requests, so as tempting as it is, it’s probably a bad idea to leave it on the free app cloud.

So now you can make ajax calls to your Mendix REST service endpoints from jQuery without worrying about cross-domain restrictions, which browsers enforce on plain ajax requests.

Here’s a sample of one such request:

var mendixurl = "https://<your-hosted-app>/rest/incidents?callback=?";
$.getJSON( mendixurl, {
    firstname: "James",
    lastname: "Gilbert",
    limit: 10,
    offset: 0,
    status: "Open",
    contenttype: "json"
}).done(function( data ) {
    // handle the array of incident records here
    console.log( data );
});