I recently did some work on a somewhat neglected C# code base; the basic function of the application was to pull down some custom reports from a third-party web API, parse the results, and store them in a database. There were a few obvious design problems, and the application hadn’t been built or deployed in years, but it did include a small test suite–around 50 unit tests, written with Microsoft’s testing framework.
View Post
The past few weeks at work, I’ve been building out a process for automating a Microsoft Tabular Model project; in a previous post I described how we automated the deployment process, and this post will focus on testing. A full, working sample project is available on GitHub.
The Problem: Testing Measures

In SQL terms, a measure is somewhere between a view and an analytic function; it’s a calculation used to dynamically aggregate and filter report tables, and it can contain a fair bit of logic.
View Post
My team recently started using Microsoft’s Tabular Model databases at work, as an intermediate layer between an operational data store and the end users who consume this data from Power BI. Tabular Models are an OLAP technology, providing an in-memory data cube, with measures defined using either the MDX or DAX query language.
As we were learning about the stack, we realized that there wasn’t much documentation around automated deployment or testing of Tabular Models; the typical deployment story seemed to be “right click -> publish” from Visual Studio.
View Post
We’ve had some interesting discussions on our team recently about the level of testing required for some very declarative sections of our codebase. I’ve been thinking about this subject a lot, especially after reading a recent post by Kent Beck.
The fundamental problem we were trying to solve was this: We have a Python dictionary that represents an entity–in this case, a person. All of its values are strings:
person = { "Name": "Bob", "Age": "42", "Rate": "2.50" }
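A minimal sketch of the conversion this kind of entity invites; the parse_person helper, the target types, and the "2.50" rate are illustrative assumptions rather than details from the post:

    # Illustrative sketch only; parse_person, the target types, and the
    # "2.50" rate value are assumptions, not taken from the original post.
    def parse_person(raw):
        # Convert the all-string entity into typed values.
        return {
            "Name": raw["Name"],
            "Age": int(raw["Age"]),
            "Rate": float(raw["Rate"]),
        }

    assert parse_person(person) == {"Name": "Bob", "Age": 42, "Rate": 2.5}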
View Post
I’ve been working to add support in slap for publishing Geoprocessing (GP) services; the workflow outlined here works with both Python Toolboxes and the older, binary .tbx file format.
Similar to map services, it’s possible to publish geoprocessing services as part of a docker build; all of the scripts, tools, and dockerfiles used in this example are available on GitHub:
- slap base docker images
- test image
- test data and scripts

Publishing from a Result File

The createGPSDDraft method takes either a Result object or a path to a result (.rlt) file.
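For reference, the underlying arcpy publishing flow looks roughly like this; CreateGPSDDraft, StageService, and UploadServiceDefinition are the standard arcpy calls, but the paths, service name, and connection file below are hypothetical placeholders:

    # Rough sketch of publishing a GP service from a result file.
    # Paths, the service name, and the connection file are placeholders.
    import arcpy

    result = r"C:\data\gp\buffer.rlt"       # result file (or an arcpy Result object)
    sddraft = r"C:\temp\buffer.sddraft"
    sd = r"C:\temp\buffer.sd"
    connection = r"C:\connections\ags.ags"  # ArcGIS Server connection file

    # CreateGPSDDraft returns analyzer output; publish only if there are no errors.
    analysis = arcpy.CreateGPSDDraft(
        result, sddraft, "BufferService",
        server_type="ARCGIS_SERVER",
        connection_file_path=connection,
        copy_data_to_server=False,
        folder_name="geoprocessing",
        summary="Example GP service",
        tags="gp, example")

    if analysis["errors"] == {}:
        arcpy.StageService_server(sddraft, sd)
        arcpy.UploadServiceDefinition_server(sd, connection)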
View Post
A lot of smart people have worked to get ArcGIS Server (AGS) up and running in a docker container; it’s a fast and convenient way to test applications that require an AGS instance to function.
It’s possible to streamline things even further by registering data and publishing services as part of the docker build process; this allows you to keep your infrastructure, data, and service definitions under source control, and use those to build an image with all the necessary services included.
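As a sketch of what the data-registration half of that build step can look like from Python (arcpy’s AddDataStoreItem is a real call, but the connection file and paths here are hypothetical placeholders):

    # Sketch: register a folder data store with ArcGIS Server during the build.
    # The connection file and paths are placeholders; adjust for your setup.
    import arcpy

    connection = r"/build/connections/ags.ags"  # server connection file in the image
    server_path = "/data"                       # path as the AGS container sees it
    client_path = "/data"                       # path as the publishing client sees it

    arcpy.AddDataStoreItem(connection, "FOLDER", "build_data",
                           server_path, client_path)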
View Post
I’ve spent a good deal of time helping teams adopt automated testing as a part of their workflow; one thing that I’ve often heard from folks who aren’t familiar with the methodology is the following:
“How much time do you estimate for writing unit tests?”
Usually what people are looking for is something like, “I need X hours for the production code, and Y hours for writing tests for this feature.”
View Post
I’ve seen a lot of GIS developers struggle to create a good project structure when building Python applications; often there’s a transition from one enormous file with a single method to a “real” software project, with modular design, well defined dependencies, and the necessary tooling.
The goal of this post is to be a summary and short checklist; these steps can improve almost any project, and are easy to implement. Used properly, they can help ease developer onboarding, promote code reuse, and reduce the time spent on boilerplate activities and code.
View Post
Virtualenv allows you to create a repeatable, isolated environment for your project and its dependencies, without worrying about what packages and versions are installed globally on your development machine. This is a standard tool for most Python projects, but since arcpy is installed as a separate, global package, using virtual environments is a little more difficult.
There are a couple of approaches to tackling this problem: either adding a .pth file to the local virtualenv, or using the --system-site-packages flag.
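As a rough sketch of the .pth approach: a .pth file dropped into the virtualenv’s site-packages adds each listed directory to sys.path at startup, which makes the global arcpy install importable from inside the environment. All of the paths below are hypothetical; substitute your own ArcGIS and virtualenv locations:

    # Sketch: expose the global arcpy install inside a virtualenv via a .pth file.
    # Every path below is a placeholder; adjust for your ArcGIS version and layout.
    import os

    arcpy_dirs = [
        r"C:\Program Files (x86)\ArcGIS\Desktop10.5\arcpy",
        r"C:\Program Files (x86)\ArcGIS\Desktop10.5\bin",
        r"C:\Program Files (x86)\ArcGIS\Desktop10.5\ArcToolbox\Scripts",
    ]

    # One directory per line; Python appends each existing path to sys.path.
    venv_site_packages = r"C:\projects\my_project\venv\Lib\site-packages"
    with open(os.path.join(venv_site_packages, "arcpy.pth"), "w") as f:
        f.write("\n".join(arcpy_dirs))

The alternative is simply to create the environment with virtualenv --system-site-packages, which makes all global packages (arcpy included) visible inside the virtualenv.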
View Post
Test fixtures can help us run each of our tests against a clean set of known data, but arcpy can still throw a few curve balls at us–there are global singletons we need to be aware of (e.g., arcpy.env), and many GP tools can have side effects (such as changing the current working directory) or limitations of their own (e.g., the Project and Copy tools don’t support in-memory workspaces). In addition, just loading arcpy has a significant performance cost, and we want our tests to be as fast as possible.
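As a minimal fixture sketch that guards against those globals (the class name and structure are illustrative, not from the post):

    # Sketch of a unittest fixture that isolates arcpy's global state.
    # The class name and structure are illustrative only.
    import os
    import unittest

    class ArcpyTestCase(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            import arcpy  # defer the slow arcpy import until a test needs it
            cls.arcpy = arcpy

        def setUp(self):
            # Snapshot global state that tests (or GP tools) might mutate.
            self._cwd = os.getcwd()
            self._workspace = self.arcpy.env.workspace

        def tearDown(self):
            # Restore the singletons so each test starts from a known state.
            os.chdir(self._cwd)
            self.arcpy.env.workspace = self._workspace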
View Post