I wanted to spend a few moments of my day off responding to this article and explaining why its statement is wrong.
James is making the statement that if you are a VAR with per-tenant extensions, you should have DevOps in place to catch when a per-tenant extension breaks.
This may be true with the service that Microsoft provides today, but it does not align with the original idea of AppSource and Extensions.
The original idea when Extensions and AppSource were “invented” was that the responsibility for notifying partners and customers of breaking changes would lie with Microsoft, not with the VAR.
When a programmer at Microsoft checks in code and it is accepted by the code cops and procedures in place, the idea was that a script would be executed against all apps on AppSource and all per-tenant extensions.
When a programmer at Microsoft makes a change that breaks an extension, the change needs to be actively refused and the programmer needs to implement it in a non-breaking way.
If this is not possible, Microsoft should actively work with the ISV and the VAR/customer to make sure that all parties are informed of the change. If the change cannot be avoided and the owner of the extension is forced to rewrite their code, Microsoft should compensate them.
Remember that customers can expect a company like Microsoft to provide a cloud solution that is robust and does not break all the time.
The fact that Microsoft decided to make the old Navision code the source of Business Central without refactoring it into microservices cannot be waved away at the expense of customers and VARs.
Please enjoy the rest of your day. Comments on this blog are disabled.
The year was 2014 and the world was spinning as it did until March this year, with mass tourism and in-person events.
With the release of NAV 2013 R2 and later NAV 2015, our community was just starting to embrace the three-tier concept and the Role Tailored Client. Nobody had heard of events or extensions. The economy was booming and everyone was too busy to worry about the future.
In that year I first did a small project for Datamasons to connect their EDI solution to Dynamics NAV using web services. Later on I would do a similar project helping the folks of Dynamics TMS connect their solution to NAV using an architecture that was as decoupled as possible and easy to upgrade.
When these ISVs asked me to publicly endorse their solutions, I told them that I would endorse the decoupled architecture and promote the idea of using best-of-breed solutions that interface with NAV rather than doing everything in NAV & C/Side.
This was not the first time I drew the wrath of an ISV in our ecosystem, but it was the first time it got quite big and ugly.
It happened at the NAVUG Summit and it created some tension around the event for those involved.
The reason for writing this blog now, reflecting on something that happened five years ago, is that several events this week made me think about that NAVUG incident more than once.
If Your Only Tool Is a Hammer Then Every Problem Looks Like a Nail
I would repeat myself too much if I started talking again about C/Side and our community's habit of using it as a single tool to solve all problems. It's in line with the ERP heritage of the late 1980s and 1990s, when interfacing was essentially non-existent, the internet had yet to be invented/adopted, and infrastructure was hard to maintain and share.
The large ISV solutions that we have in our ecosystem were all born in the same era, founded back in the day by young people (most often or always guys) working long hours in their garage to establish their brand.
Today most of them are in their 50s or early 60s, worrying more about their legacy than about the future.
Back then I was just a bit too young to join that party, which leaves me in the middle, with no legacy to worry about and an open mind about the future.
It’s a cloud-connected world
Today we live in a connected world in which it has never been easier to open up your application and share data and processes across platforms and geographies.
Microsoft did a fantastic job with Azure on the one side, as the leading cloud platform for serverless applications, and with Business Central and the Power Platform/CDS on the other, as cloud-ready frameworks to build business applications.
With Business Central it has never been easier to design an open architecture that allows you, as an ISV, to keep your solution small and manageable while allowing your partners to handle edge cases by subscribing to events or exchanging data through the API.
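As a sketch of what that decoupling can look like: a partner app can react to a base-app event without touching the base code. The object number, names and endpoint below are hypothetical; OnAfterPostSalesDoc is a real published event on the Sales-Post codeunit, but check the exact signature for your Business Central version.

```al
// Hypothetical partner extension: reacts to a posted sales document
// by notifying an external system, instead of modifying the base app.
codeunit 50100 "Posted Order Notifier"
{
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Sales-Post", 'OnAfterPostSalesDoc', '', false, false)]
    local procedure OnAfterPostSalesDoc(var SalesHeader: Record "Sales Header")
    begin
        // Hand the document off to an external service (e.g. an Azure Function)
        NotifyExternalSystem(SalesHeader."No.");
    end;

    local procedure NotifyExternalSystem(DocumentNo: Code[20])
    var
        Client: HttpClient;
        Response: HttpResponseMessage;
    begin
        // Hypothetical endpoint; error handling omitted for brevity
        Client.Get('https://example.com/api/orders/' + DocumentNo, Response);
    end;
}
```

The base app stays untouched, and the integration can be uninstalled without leaving a trace.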
For some reason, and I don't really understand why, it looks like the larger ISVs are not open to seizing this opportunity.
Many ISVs have monolithic applications that require “fob” files with thousands of objects to be inserted into your system. The reason for these monoliths is that they all try to solve every problem with the same software.
This is no longer necessary in the cloud world, where you can break your application into multiple smaller components to start with, and you can also leverage the Azure stack to move parts of your application to Power Platform/CDS, or even Cosmos DB, Docker, microservice APIs, etc.
Time. That’s the only answer I can think of.
Given enough time we will see what happens and who wins.
If I had to place a bet, I would avoid the majority of the horizontal solutions on AppSource that have a tight coupling to AL.
Instead I would bet on those that have a decoupled architecture and allow their software to be seamlessly connected to anything that understands the OData query language and HTML5.
Sometimes I just have to write my frustration away in order to clear my head. Don’t expect technical tips and tricks in this post, but maybe some inspiration.
Today I was absolutely flabbergasted. Both on Twitter and on LinkedIn (I am a social media junkie) there were actually threads about Microsoft removing the WITH statement in AL. I was literally like, OMG! Go spend your time on the future!!
I’m not going to spend more time on this idiotic topic than this. AL is a horrible programming language and in my future programming career I expect to spend less and less time each year using it.
What does your toolbox look like?
My father-in-law, may he rest in peace, could literally make anything with his hands. He was a carpenter by profession, but he could paint, do masonry and plastering, pave roads; you name it and he could do it, as long as he had the right tools, a good mindset, and could watch someone do it for a while to pick up some tricks.
As programmers we seem to be married to languages and frameworks, and I can only guess why this is the case. In the old world we came from, called “On Premises”, it was hard to have multiple frameworks, operating systems and databases work side by side.
THIS IS NO LONGER TRUE!!! WAKE THE F*CK UP!!
We live in a new world called the cloud, preferably the Microsoft Azure cloud, and in this new world frameworks, databases and programming languages co-exist side by side just fine. C/Side is not your toolbox; Azure is!
How am I migrating our 200GB+ database with 2000 custom objects to Business Central? BY USING AZURE!!!!!
– Mark Brummel –
Quote me on that.
For the last year or so I've been preparing “our” Business Central SaaS migration, and the first thing I did was NOT look at AL code and extensions. The first thing I did was implement Azure Blob Storage.
The second thing I implemented was Azure Functions, replacing C/AL code with C# code.
Number four on my list was Logic Apps, to replace Job Queue processes scanning for new files and to enhance our EDI.
Right now we are implementing Cosmos DB, with Logic Apps and a custom API, to reduce our database size and improve the scalability of our Power BI.
FIVE PROJECTS to move to Business Central SaaS WITHOUT a single line of AL code written, and we started our project about 18 months ago.
The plan is to move to Business Central SaaS within the next 24 months with as few AL customisations as possible.
You know what is funny? The things we are moving OUT of Business Central are the things that make us agile. These are the things we always have to make ad-hoc changes to, which is why we love C/Side so much.
Please implement a new EDI Interface. Boom, done. With Logic Apps and an Azure Function.
Please change this KPI. Boom, done with Power BI.
Please make this change to the UI. Boom, done with Meta UI.
Oh, and of course, not to forget my friends in Denmark.
Please change the layout of this report. Boom, done with ForNAV!
My frustration is probably not gone; it won't be gone as long as I read people on the internet treating AL as if it were C/AL, WHICH IT IS NOT!
Fortunately I have a fantastic new job at QBS, which allows me to evangelise thinking outside the box and help people get started with Azure. Only last week, in a few hours, I got a partner up and running with an Azure tenant running Business Central on a scalable infrastructure for performance tests.
Telemetry is everything: you cannot have enough data when users start asking why the system behaves differently than it did yesterday, or why performance is changing over time.
This is where Azure SQL stands out from on-premises: you can get so much more data, and in a way that is easy to analyse.
However, you need to know where to find it, because not everything is set up automatically after you create a database. Some is, some is not.
This blog is about how to connect Azure SQL Analytics to your Azure Monitor.
The steps to do this are described in this docs entry, and I don't want to repeat existing documentation. I will add some screenshots of results for a 220 GB Microsoft Dynamics NAV database with 80 concurrent users.
After you have activated Azure SQL Analytics it will not be visible for a while. It takes time in the background to be generated and put together by the Microsoft Minions who control your tenant. Remember that these Minions have labour contracts and a right to a break every now and then.
Step 2 – Azure Monitor & More…
When the Minions are finished, the data will show up in Azure Monitor. Search for it in your environment.
And then, at least in my case, you have to click on More…
This should show a link to your Azure SQL Analytics. In my case with two databases: DEV and PROD.
Step 3 – The Dashboard
The first dashboard you'll see is something like this, except that this one shows data 24 hours after activation, and we had a busy Friday with a performance incident. I'll get back to that.
There are already some interesting statistics visible here, like wait stats, deadlocks and auto-tuning. I'll handle wait stats in this blog and maybe I'll get back to deadlocks and auto-tuning later. There is a “good” reason the auto-tuning is red, and I'll look at that tomorrow (Sunday) when nobody is working on the system.
Step 4 – Drill Down | Database Waits
If we drill down into the Database Waits we see more details on what types of waits we are dealing with here.
It does not help to look at these waits without narrowing down to specific moments in time when “things go wrong”, because specific events relate to specific wait stats, and some waits are just there whether you like it or not. We all know CXPACKET, because NAV/Business Central fires a lot of simple queries at the Azure SQL engine, resulting in wasted CPU time. There is not much you can do about that (as far as I know).
Step 5 – Houston we have a problem!
It's 3:51pm on Friday afternoon when my teammate sends me a message on Skype that users are complaining about performance. Since we just turned on this great feature, I decide to use it and see what went wrong.
We drill down again one more time and click on the graph showing the waits.
Note that this screenshot was created a day after the incident, but it clearly illustrates and confirms that “something” was off around the time my teammate sent me a message. The wait time on LCK_M_U goes through the roof! We have a blocker in our company.
Hey, this is KQL again!
Now we are in a familiar screen, because this is the same logging that Business Central Application Insights uses. Drilling down into the graph actually generated a KQL query.
Step 6 – What is causing my block?
To see what query is causing my block, I have to go back to the Azure dashboard and click on Blocks, like this.
From here we have two options: if I click on the database graph I am taken into the KQL editor, and if I click on a specific block event I get a more UI-like information screen. Let's click on the latter.
Step 7 – Get the Query Hash
This is where it gets nerdy. The next screen shows the blocking victim and the blocking process.
It also shows a Query Hash.
This is where I had to use Google, but I learned that each “ad-hoc” query executed against SQL Server gets logged internally with a query hash.
Since NAV/Business Central only uses ad-hoc queries, we have a lot of them, and it's important to understand how to read them.
What worries me a bit here is the blocking process's status, which is “sleeping”. I have to investigate this more, but I interpret it as a process that went silent while the user is not actively doing anything.
Step 8 – Get the Query
Using Google (DuckDuckGo, actually) I also found a way to get these queries, as long as they still exist in the cache of your SQL Server. Simply use this query:
SELECT deqs.query_hash,
       deqs.query_plan_hash,
       deqp.query_plan,
       dest.text
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
WHERE deqs.query_hash = 0xB569219D4B1BE79E
This will give you both the query and the execution plan. You have to use SQL Server Management Studio to execute this against your Azure SQL database.
Step 9 – Restart the service tier
Unfortunately for me, this journey resulted in having to restart the service tier. We could not identify the exact user who had executed the locking query. Maybe we will be able to do that in a future incident, since I'm learning very fast how to use this stuff, and time is of the essence when incidents like this happen in production environments.
Needless to say, the NAV Database Locks screen was not showing anything. I would have used that otherwise, of course.
In my series on Application Insights for Microsoft Dynamics Business Central / NAV, this is probably the most boring post. However, it is quite important in order to teach you folks about KQL, the Application Insights API, etc.
Step 1 – Create Application Insights
In your Azure Tenant search for Application Insights and select Add.
There is not much to fill in here. The Resource Group is probably the most important field if you have a bigger Azure tenant; you want to group your stuff together.
Step 2 – Grab the key!
After the resource is created, copy the key to your clipboard, then leave the Azure Portal and move to the Business Central admin portal.
Step 3 – Put the key in Business Central and Restart your system
Step 4 – Analyse the data
But that's for the next blog, about KQL. This will be a language at least one person in your company needs to master. Definitely.
Wait… is that all??
Essentially yes, but there is a caveat…
The million-dollar question is probably whether or not to put multiple customers into one Application Insights resource.
This probably depends on one question: does your customer want access to the data? If they do, the data needs to be in its own Application Insights resource so you can grant your customer access.
The good news, and we'll get to this, is that you can query across Application Insights instances.
And in fact, in this enum value, I do want a default implementation: “Empty” serves as a fallback, because I want to use the new expandable and collapsible row feature in BC16.
The solution: this is a property at the Enum level.
My motivation for working with an Enum and an Interface is that we have a partner that wants to implement a feature called “multiple layouts”, which we think does not fit the simplicity we have in mind for our core product.
This allows the partner to create a new App in AppSource with a dependency on ForNAV that introduces new features that only a subset of our customers need.
The majority of our customers are not burdened with unnecessary complexity, while the few who need it have a solution they can subscribe to.
That, my friends, is what we mean by Extendability by design.
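A minimal sketch of the pattern described above. The names and object numbers are made up, but DefaultImplementation (the Enum-level property mentioned) and the per-value Implementation property are real AL syntax:

```al
// An extensible enum implementing an interface, with a default
// implementation so values like "Empty" need no code of their own.
interface "ILayout Provider"
{
    procedure GetLayout(): Text;
}

enum 50110 "Report Layout" implements "ILayout Provider"
{
    Extensible = true;
    // Property at the Enum level: any value without its own
    // Implementation falls back to this codeunit.
    DefaultImplementation = "ILayout Provider" = "Empty Layout Provider";

    value(0; Empty) { }
    value(1; Standard)
    {
        Implementation = "ILayout Provider" = "Standard Layout Provider";
    }
}

codeunit 50110 "Empty Layout Provider" implements "ILayout Provider"
{
    procedure GetLayout(): Text
    begin
        exit(''); // fallback: no layout
    end;
}

codeunit 50111 "Standard Layout Provider" implements "ILayout Provider"
{
    procedure GetLayout(): Text
    begin
        exit('Standard');
    end;
}
```

A partner app can then add its own enum value via an enumextension, together with a codeunit implementing the interface, without our core product knowing anything about it.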
Another quick tip for something I’ve used this week to help out a QBS partner with performance issues on Business Central.
Since the last release, it's possible to run read-only commands against a real-time copy of your Business Central database by using the DataAccessIntent property.
This allows API pages, reports and queries to be executed outside of your production database, which is ideal for Power Apps, Power BI and websites that, for example, only show status information on outstanding orders.
Then I remembered: of course, we can also use this with the ForNAV report pack for financial reports that run longer, like the Inventory to G/L Reconciliation (which already runs 10 times faster than the out-of-the-box version).
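For illustration, here is a sketch of how the property is set. The object and its fields are hypothetical, but DataAccessIntent = ReadOnly is the actual property that routes execution to the read-only replica:

```al
// Hypothetical API query that is served from the read-only
// database replica instead of the primary production database.
query 50120 "Outstanding Orders"
{
    QueryType = API;
    APIPublisher = 'contoso';
    APIGroup = 'sales';
    APIVersion = 'v1.0';
    EntityName = 'outstandingOrder';
    EntitySetName = 'outstandingOrders';
    // Routes this query to the secondary (read-only) replica.
    DataAccessIntent = ReadOnly;

    elements
    {
        dataitem(SalesHeader; "Sales Header")
        {
            column(documentNo; "No.") { }
            column(orderDate; "Order Date") { }
            column(amount; Amount) { }
        }
    }
}
```

Keep in mind that anything running with ReadOnly intent cannot write to the database; attempting an insert or modify from such a session will fail.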