Thoughts on Software Development Life Cycle (part 3)

This is part 3 in a series on software development. The earlier parts are available here:

In this episode, we want to pick up where we left off and discuss how a small company like ours can still have an effective software validation process, without slowing things down or ending up stranded in hopeless bureaucracy.

Indeed, it's a bad joke that once you have your software validated, (CE) certified, and government approved, you stop developing it any further, because the overhead (i.e. cost) of going through the whole process again doesn't justify the (typically small) incremental gains to be had from releasing a new version.

No Sir, at Pathomation we like to be creative and innovative, and we've always said that administration should not stand in the way of that. So what did we do? As with our original software development itself, we looked at automation to provide part of the answer.

Remember the schematic from last time?

We came up with a whole database schema to put behind this and optimize the workflow from one type of documentation to the next.
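To make the idea concrete, here is a minimal sketch of what such a schema could look like (the table and column names are our own hypothetical invention, not Pathomation's actual schema), using SQLite for brevity:

```python
import sqlite3

# Hypothetical mini-schema: one System Overview (SOV) per product,
# many URS entries per SOV, many tickets per URS.
ddl = """
CREATE TABLE system_overview (
    id          INTEGER PRIMARY KEY,
    product     TEXT NOT NULL,
    description TEXT
);
CREATE TABLE urs (
    id         INTEGER PRIMARY KEY,
    sov_id     INTEGER NOT NULL REFERENCES system_overview(id),
    title      TEXT NOT NULL,
    risk_level TEXT CHECK (risk_level IN ('low', 'medium', 'high'))
);
CREATE TABLE ticket (
    id           INTEGER PRIMARY KEY,
    urs_id       INTEGER NOT NULL REFERENCES urs(id),
    source       TEXT,   -- e.g. 'talkback', 'jira', 'helpdesk'
    external_ref TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
```

With foreign keys linking every artifact back to its URS, "synchronizing documents" becomes a query instead of a copy-paste exercise.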

Custom granularity is what we’re after here, and we like to think we come pretty close!

Because SSMS (SQL Server Management Studio) isn't known for user-friendly data entry, we also built a set of PHP scripts around it, so we can easily populate the database (and most importantly: never have to worry about synchronizing Word documents by hand again!)

We’ll use parts of the PMA.core project for sample data and illustrate our approach throughout the rest of this article.

Oh, and if you’re curious about the other product names you see in the list: have a look at our product portal.

Requirements and specifications

One of the key documents to keep track of when developing software is the user requirement specification (URS).

After writing out a general description of what our software is supposed to do (the System Overview document – .sov extension in the schema), we can start to capture in more granular detail what the different bells and whistles are going to be.

There's a lot of room for interpretation at this level. Since PMA.core's bread and butter is supporting as many different slide scanners as possible, each file format is a separate URS entry.

For each URS, a specification is written up. Subsequently, tickets can be assigned to it. Tickets can originate from different locations:

Talkback is our historical original ticketing system, based on Corey Trager's excellent BugTracker.Net project.

As we grew our team, we outgrew BugTracker and upgraded to Jira.

Features can also be the result of a helpdesk ticket. Keeping track of which requirements originate from actual user requests (and not Pathomation's CTO's crazy brain) is useful for prioritization.

A completely annotated URS ends up looking like this:

We have all the information on a single page. What a difference from a few years ago, when we had to puzzle these pieces together from several Word documents.

When are you done with your software? When you’ve written sufficient tests to prove that all your user requirement specifications effectively work as intended.

Risk assessment and testing

In order to get a grasp on what “sufficient” testing means, a risk assessment has to occur first.

We provide product owners with a wizard-like approach to perform the risk analysis for each URS individually. Let's have a look at this one:

The Risk Analysis then becomes as follows:

Do this for each URS, and you can come up with a granular test plan. In the future, we'd like to couple this back to the URS detail screen itself: a high-risk feature should have three tests; a medium-risk feature, two tests; and a low-risk feature can be sufficiently documented by providing a video.
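That rule of thumb is simple enough to encode directly. A hypothetical sketch (the mapping follows the high/medium/low rule just described; the function name is our own):

```python
def required_evidence(risk_level):
    """Map a URS risk level to the minimum test evidence required,
    following the high/medium/low rule of thumb described above."""
    rules = {
        "high": "3 tests",
        "medium": "2 tests",
        "low": "1 video walkthrough",
    }
    return rules[risk_level.lower()]
```

For example, `required_evidence("high")` returns `"3 tests"`; wiring such a check into the URS detail screen would make coverage gaps visible at a glance.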


Remember our original Word-mess?

We (and you too) still need these documents at the end of the day for filing. You can complain about generating paper, but when you think about it, it absolutely makes sense to keep producing these textual snapshots.

Just look at it from the other side: imagine you're a regulatory agency and you get applications from 100 companies. Each company deposits a database and a set of scripts to interface with it, along with the message "oh, just get the scripts up and running and you'll get all our information; it's super-easy, barely an inconvenience". The only alternative then is for the agency to provide its own templates for all 100 companies to fill out.

Luckily that direction is easy from where we’re standing. What we do to provide the necessary regulatory documentation is extract the proper information from the database, format it properly as HTML, and then print those webpages to PDF documents.

A traceability matrix? Well, that's just a matter of a couple of outer joins and summarizing the outcome in a table.

Documentation. Validation. Done.

What’s left?

We used the technique described in this article to obtain our self-certified CE-IVD label for our PMA.core software.

We also worked with external independent consultants to make sure that our technique would indeed withstand outside scrutiny. After all, any software developer can self-certify their own software.

It's important to note that in addition to keeping track of a number of documentation items, we also performed a validation study in two hospitals, with two different slide scanners. This confirmed that everything we had done so far indeed applied in a clinical setting as well.

Now it's on to PMA.core 3.1. This will be a minor release, with a focus on additional security features like LDAPS and improved IAM / EC2 integration.

For PMA.core 3.1, we're not going through the whole process described here from scratch. How we solve this problem in an incremental fashion is food for a next article.

Thoughts on Software Development Life Cycle (part 2)

See our earlier article on the processes that we developed at Pathomation to improve our software development practices.

Software validation

Much has been written about software testing, verification, and validation. We’ll spare you the details, subtleties, and intricacies of these. Fire up your favorite search engine. Contact your friendly neighborhood consultant for a nice sit-down fireside chat on semantics if you must.

Suffice it to say that validation is the highest level of scrutiny you can throw against your code. And, like many processes, at Pathomation, we take a pragmatic approach.

Pragmatic does not need to mean that we bypass rules or cut corners. Au contraire.

The need

The process sometimes gets a bad rep. If software validation is so involved, why even do it at all? After all:

  • You have a ticketing system like Jira or Bugzilla in place, right?
  • You have source code control like Git or SVN in place, right?
  • Your developers can just test their code properly before they commit, right?
  • Right?…

Anybody with any real-life experience in software development knows it’s just not that simple.

At Pathomation, we have Jira; we have Git; we add Slack on top of that for real-time communication and knowledge transfer; etc. And yet, something was missing.

Consider the following typical problems during non-validated SDLC:

  • Regression bugs
  • Incorrect results
  • Wrong priorities
  • Bottlenecks and capacity planning problems

Are you rolling your eyes now and thinking “Du-uuuuh”? Do you think these are just inherent to software development in general? Well, let’s see if there’s a way to reduce the impact of these at least a bit.


Writing software is sort of like a manufacturing process. So with terms like GLP (Good Lab Practice) and GMP (Good Manufacturing Practice) already in existence, it made sense to expand the terminology to GAMP, which stands for Good Automated Manufacturing Practice (and yes, it is derived from GMP).

In essence, GAMP means that you've documented all the needs that your software addresses, have processes in place to follow up on the fulfillment of these needs (the actual code-writing process), and subsequently have a system in place to prove that the needs are effectively met.

GAMP helps organizations prove that their software does what they say it does.

There are different levels of GAMP, and as the level increases, so does the burden of proof.

  • GAMP levels 1-3 are reserved for really widespread applications. Think operating systems and firmware. When you install Windows on a server, you expect it to do what it does. You may still check out a couple of things (probably specific procedures tied to your unique situation), but roughly speaking you can rely on the vendor that the software does what you expect it to do, and that bugs will be handled and dealt with accordingly.
  • GAMP level 4 is tied to software that can still be considered COTS, but is somewhat less widespread than, say, an operating system. It may be an open source application that you think is a good fit for your organization: it may have a wide user base, but it's hard at the same time to beat the resources of the big tech companies. A certain level of scrutiny is still warranted.
  • GAMP level 5 is for niche software applications. It requires the highest level of checks, tests, and reporting. To some extent, everybody that builds their own software (including the big techs) is expected to do their own GAMP 5 validation.

We like to brag that we see a lot of users. But regardless of how many satisfied users we have, we'll never come even close to software with the user base of Microsoft Office, Google Chrome, or even specialty database management systems like MongoDB or Neo4J.

PMA.core (including the PMA.UI visualization framework) is niche and custom software. Therefore, all of its derived components must go through extensive GAMP 5 validation procedures.

Next stop: Amazon. Read instructions. Clear.

Hard times and manual labor

In principle, it’s all very simple: you document everything that you do and provide evidence that you’ve done it, and that at the end of the day things work as planned and expected.

But at the very least, you need somebody to monitor this entire process and, most importantly, keep it going. So we contracted with an external organization, and it sort of worked. That is, after a lot of frustration, we ended up with a list of documents that was good enough to claim that version 1.0 of our software was now validated:

The experience was not a fun one; nor a creative one; nor a productive one; nor… There were many things it wasn't. In typical Pathomation fashion (remember, we're rebels at heart), we started wondering how we could improve the process. We identified two major bottlenecks:

  • Lack of involvement: it's all too easy to throw money at a problem hoping that it will go away. It doesn't. Read our separate rant about consultants for a somewhat different perspective on the consultancy world.
  • Inefficient procedures. No, wait, that's too polite. How about hopelessly antiquated workflows? Getting there. Except for the word "flow". What we did didn't flow at all; think of molasses flowing, or lava…

Essentially we ended up sending back and forth a bunch of Word documents. A lot of them… and they were long…

And you dread the moment when you want to add anything afterwards, because that involves making modifications in long documents that all need to reference each other correctly. Like below:

A user requirement specification (URS) needs to have functional specifications (FS), followed by technical specifications (TS). Since all these are spread out across separate Word documents, you need a fourth document to track the items across the documents: a traceability matrix (TM). The TM is stored as an Excel spreadsheet, because storing tables in a Word document would just be silly… apparently??

They say insanity is repeating the same process over and over and expecting different results, right? That was our conclusion after our first couple of iterations and our experience with the software validation process as a whole.

A tunnel… with light!

Realizing that we would first and foremost have to take more ownership of the validation process, we thought about tackling the “not a fun one; nor a creative one; nor a productive one” accolades. Pathomation is an innovative software company itself. Could we put some of that innovation and software savviness into the software validation process itself, too?

We started by looking back at the delivered documents from our manual procedure. After some doodling on paper, we deduced a flow-chart that we agreed would be pretty much common to each validation project:

With our 30,000-foot view in place, our next step was to start thinking about the most efficient way to fill it out for new products and releases going forward. That is the story we'll be elaborating on in part 3 of this mini-series.

Thoughts on Software Development Life Cycle (SDLC) – part 1

What we do

At the end of the day, what we do is straightforward: Pathomation makes middleware software for digital pathology.

Now depending on who you talk to, one or more terms in that statement may take some explaining:

  • Pathomation: it’s not phantomation (ghostbusters, anybody?), photomation (photonics, quantum physics; nah, to be honest we’re probably not smart enough)… It’s Pathomation, from Pathology – Automation.
  • Middleware software: Middleware acts as a broker between different components. Your typical example of middleware would be a printer driver, which allows a user to convert text and pixels from the computer screen to ink on paper. On that note, "pixel broker" is another way we like to describe ourselves. The slide scanner converts the tissue from the glass slide into a (huge!) collection of pixels, and we make sure these pixels can be read and presented properly, regardless of what scanner was used to generate them, and regardless of where they are stored.
  • Digital pathology: it started in the 60s at Massachusetts General Hospital in Boston with tele-pathology. In the 2000s, engineers figured out that they could automate microscopes to take sequential pictures of various regions of interest on glass slides, and then stitch those to create (again: huge!) compound images that would let pathologists bypass the traditional microscope altogether and use the computer screen as a virtual microscope.
  • Pathology: the medical sub-specialty that diagnoses disease at the smallest level. Up to 70% of therapeutic treatment is attributable to one pathological exam or another. That's… huge, actually, and it's why it's all the more important to make the pixels flow from the scanner to the storage location to the computer screen as smoothly as possible.

So there you have it: we make middleware software that optimizes the transport of pixels. We show the pathologist the pixels he or she needs, when he or she wants, where he or she desires to have them.

The central piece of software we develop for that is called PMA.core, and on top of PMA.core we have a variety of end-user front-end applications like PMA.slidebox,, or PMA.control.

Growing software

We didn’t start off with these though. So bear with us as we take a little trip down memory lane.

Once you have a successful piece of software, it doesn’t take long for people to ask “hey, can I also use it for this (or that)?”. Therefore, on top of PMA.core, we built several Software Development Kits (SDKs) that make it easier for other software developers to integrate digital pathology into their own applications (image analysis (AI/DL/ML), APLIS, LIMS, PACS…).

The next question is: “I don’t know how to program. I just have a WordPress blog that I want to embed some slides in.” So we wrote a series of plugins for various Content Management Systems (CMS), Learning Management Systems (LMS), and even third-party Image Analysis (IA) tools.

Eventually, we got into integrated end-user application development, too. As far as we’re concerned, we have three software packages that cater to end-users:

  • PMA.slidebox caters to educational applications whereby people just want to share collections with students, conference participants, or peers. A journal could benefit from this and publish virtual slides via a slidebox to serve as supplemental material to go with select author papers.
  • wants to be a pathologist’s cockpit. You can have slides presented in grid layouts, coming from different PMA.core servers. But if you’re an image analyst working with whole slide images (WSI), that works, too. Integrate data sources, and annotations. Have a video conference, and prepare high-resolution images (snapshots) for your publications… Do it all from the convenience of a single flexible and customizable user interface. If you’re an oncologist or surgeon that only occasionally wants to look along, may be a bit of overkill. But to build custom portals for peripheral users, you have those SDKs of course.
  • PMA.control goes above and beyond the simple collections that you place online with PMA.slidebox. With PMA.control, you can manage participants, manage what content they see at that what time, organize complex training sessions, organize the development and evaluation of new scoring schemes etc. With PMA.control, you are in… control.


How do we develop it all? We’re a small company after all, with a small team, and many people wear multiple hats.

First, we like technological centipedes. If your ambition is to become an expert on jQuery, SQL, or Vue.js, and only do that, Pathomation is not the place for you. Even if you're a full-stack developer, we'll encourage you to get out of your comfort zone and write an occasional Jupyter notebook to test someone else's code as part of our QA process.

Second, we have a workflow process that is tailored to our size, and we use the tools that we find useful. We don’t treat Scrum and Agile as gospel, but we adapt what we think makes sense.

Pipelines and dashboards

Sometimes people think we're crazy for supporting so many tools and technologies. Sometimes we think we're crazy for supporting so many tools and technologies.

But the truth of the matter is: we really want to remain flexible. We strive to be the go-to digital pathology middleware solution on the market, and that’s not going to happen by carving out a niche within a niche (as in: “everybody should just use PHP to build digital pathology portals from hereon”). We could probably have a more comprehensive SDK if all we did was focus on Java, but we wouldn’t be able to help you with that if you come to us with a PyTorch algorithm or a Drupal blog.

All technologies come with their own peculiarities though, and as much as we adhere to the above centipede principle, no one can know it all. Or should even know it all.

So we’re big fans of dashboards and pipelines. To give you an idea:

  • We use standard applications like Jira, Slack, and Git extensively on a daily basis
  • We also have our own dashboard portal that integrates metadata from the above into KPIs that make sense to us.
  • A KPI is a KPI; we prefer knowledge. We're lucky enough to still have relatively short decision processes, and everyone in the company understands what we're doing. Because of this, we typically stay away from Google Analytics: nobody gets anything out of a KPI without context.
  • Scrum and Agile are full of rituals. If you want to perform rituals though, your calling may be in religion rather than technology. Not that we frown upon religious callings. We’re just saying there’s a difference. Stand-up meetings are a good example. For high-profile projects, we’ve found stand-up meetings to be extremely valuable. But when that (phase of the) project is over, there’s no need to keep the meeting schedule around anymore. If we did that, it would be the only thing we would do anymore.
  • We look at standard methodologies as kitchen appliances; you use the ones you need when you need them. No need to institutionalize them. Some days you need a conventional oven; some days you need a microwave. There’s a reason they both exist. Chefs tell us that hybrid appliances that claim to do both, don’t really provide either functionality particularly well.

Does it work?

Sure, Pathomation is a small company, and we're unconventional. Some call us rebels, renegades… Some even partner with us exactly because of this.

But of course, being rebellious cannot be a goal in itself; or even a merit.

What we've come to realize, particularly in the last year or so, is that products are only part of the equation. What a company needs, in addition, to become successful is a set of processes.

The question then becomes: So does it work?

We think it does.

What it comes down to is that we do have a process that works now. Developers receive their input via Jira; managers can monitor respective projects through Jira dashboards. Build pipelines for different environments (no matter how diverse) are all organized through Jenkins. We’re on Microsoft Teams for meetings and chats, both at the office and at home. And for real-time technical support amongst the developers, there’s Slack.

Even though we use COTS software, we’ve still tuned it to match our own scale and pace. We still prefer on-premise installations instead of cloud, and we probably only have begun to scratch the surface of the possibilities in Jira (Confluence has recently been added to our arsenal).

There's no right or wrong here, and no reason you should follow our lead. Each company is different and should find what works for them. And realize that standardization can be a double-edged sword.

But even so: does it work? How do these processes in our case lead to better software?

We recently obtained our CE-IVD mark for PMA.core. So: yes.

But the tools and techniques described in this article don’t give the complete picture. So in a follow-up article, we plan to elaborate on what else was needed in addition to what we describe here, to get to yet another level of quality.

Extending PMA.core functionality with external scripts

Pathomation offers a comprehensive platform with a variety of technical components that help you build tailor-made digital pathology environments and set up custom workflows. Centered around PMA.core, our powerful tile server, everything else can be connected and integrated. Want a feature that PMA.core does not support, and have a script or code that implements it? Great! You can plug it in.

PMA.core allows the registration and execution of third-party command line scripts via its External Scripts admin page and its Scripts Run API. You can use this to create anything you need, from simple workflow tasks like batch renaming/moving of slides to fancy AI and tissue recognition algorithms. Assuming you have found one such fancy script and want to integrate it into PMA.core, let's go through the process step by step.

Preparing for integration

As an example we will use the sample script (provided here) that recognizes tissue automatically using the OpenCV computer vision library and imports the recognized areas as annotations, all in one go. The sample tissue recognition script requires Python and OpenCV to be installed and configured on your system. The script also requires some parameters to execute successfully, such as the PMA.core server URL, a username and password, and the path to the slide to analyze.
For this reason we will create an intermediate .bat file to facilitate the execution of the Python script with the correct parameter values, as described below (replace \path\to\script with the path where the script is located).

python.exe "\PATH\TO\SCRIPT\" -l %2 -u %3 -p %4 -t %5 -f %1

Registering a script

To register your script with PMA.core, navigate to the Settings -> External Scripts page and click Add.

Adding a new external script with PMA.core interface

You need to provide the following information describing your script and the parameters required to execute it.

  • Name: A unique name for the script, used to fetch it and distinguish it from others; for our example, enter Automatic tissue annotations
  • Command: The command line to execute (a .bat, .cmd, or .exe file); for our example, enter \path\to\script\AutoAnnotator.bat
  • Arguments: The arguments passed to the script; for our example, enter {slidePath} {serverUrl} {username} {password} {targetDirectory}
  • Parameters: A dynamic list of parameters passed to the script by PMA.core; for our example, enter the list of parameters as shown in the following image.
The settings required to register our script

In the arguments section, any text you enter will be passed to the executed command as is. An exception is text enclosed in curly brackets (for example {slidePath}). If the text inside the curly brackets equals a parameter name, PMA.core will replace it with the value supplied at the execution step. PMA.core supports three types of parameters that are validated accordingly at execution time: String, Number, and Path. Number parameters are validated as floating point values, and Path parameters are validated by checking the existence and permissions of the specified value.
After clicking Save, you should see your newly created script and its settings on the main index page.

List of external scripts registered to PMA.core

Executing the script

Now that we have registered the script correctly, we can execute it using the Scripts Run API. On the index page of the interface, in the URL column, you can copy/paste a helper URL with all the required parameters to execute the script. In our example, the helper URL is:

/scripts/Run?name=Automatic tissue annotations&slidePath={Path}&serverUrl={String}&username={String}&password={String}&targetDirectory={String}

and we will replace {Path} in slidePath with the virtual path to the slide we want to annotate, {String} in serverUrl with the PMA.core server URL, {String} in username with our PMA.core username, {String} in password with our PMA.core password, and {String} in targetDirectory with the path to a local temporary folder.

After executing the script using the API you will get a JSON response with the following fields:

  • ScriptName: The executed script's name
  • Success: A boolean value indicating whether the script executed successfully or not
  • ErrorMessage: An optional error message, if any occurred
  • Result: The output of the script
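Putting it together, a client call could look like the following sketch (Python, with a hypothetical server URL and credentials; the response is a mocked example in the shape documented above, not real output — in practice you would fetch the URL over HTTP):

```python
import json
from urllib.parse import urlencode

# Hypothetical values; the endpoint shape follows the helper URL shown above.
base = ""
params = {
    "name": "Automatic tissue annotations",
    "slidePath": "Reference/Aperio/CMU-1.svs",
    "serverUrl": "",
    "username": "demo",
    "password": "secret",
    "targetDirectory": r"C:\temp",
}
url = base + "/scripts/Run?" + urlencode(params)

# Mocked response body in the documented shape; in practice you would fetch
# `url` with urllib.request (or requests) and parse the JSON body instead.
response = json.loads(
    '{"ScriptName": "Automatic tissue annotations", "Success": true,'
    ' "ErrorMessage": null, "Result": "annotations imported"}'
)
if not response["Success"]:
    raise RuntimeError(response["ErrorMessage"])
```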

After a successful execution of our script on a sample slide, you should be able to see the generated annotations outlining the tissue as recognized by the script's algorithm.

The final result of the tissue recognition algorithm

Required files

Automatic tissue recognition Python script

PMA.UI on React.JS with Collaboration using

PMA.UI is a JavaScript library that provides UI and programmatic components to interact with the Pathomation ecosystem. Its extended interoperability allows it to display WSI slides from PMA.core, PMA.start, or Pathomation's cloud service, My Pathomation. PMA.UI can be used in any web app that uses a JavaScript framework (Angular.JS, React.JS, etc.) or in plain HTML webpages. The collaboration library (now called PMA.collaboration) is a JavaScript library that provides functionality to share your PMA.UI app/page with other users, keeping all parts of the page synchronized, allowing annotations to be drawn simultaneously, and sharing multiple viewports, just like a multi-head microscope but over the internet.
We will create an example React application and integrate both PMA.UI and the collaboration library. You can use an existing React application or create a new one using create-react-app, e.g.:

npx create-react-app collaboration-react

First, we have to install the PMA.UI and PMA.collaboration libraries using npm by running

npm i @pathomation/pma.ui
npm i @pathomation/pma.collaboration

inside the application's directory.

The next step is to add jQuery to our app. Open the index.html file inside the public directory and add

<script src=""
    integrity="sha256-/xUj+3OJU5yExlq6GSYGSHk7tPXikynS7ogEvDej/m4=" crossorigin="anonymous"></script>

in the head tag.

It's time to implement the main functionality of PMA.UI and fire up 4 simultaneous slide viewports on the same page. We will use the PMA.UI components, namely the context, slide loader, and auto-login, to do this easily. So go ahead and replace the default App.js with the following initialization page:

import { createRef, useEffect, useRef, useState } from "react";
// Import names below follow the identifiers used in this snippet; adjust them
// to the actual exports of @pathomation/pma.ui if they differ.
import { AutoLogin, SlideLoader, UIContext } from "@pathomation/pma.ui";

// Placeholder connection details -- point these at your own PMA.core instance
const pmaCoreUrl = "https://my.pmacore.server/";
const username = "username";
const password = "password";

const imageSet = ["Reference/Aperio/CMU-1.svs", "Reference/Aperio/CMU-2.svs", "Reference/3DHistech/CMU-1.mrxs", "Reference/3DHistech/CMU-2.mrxs"];

const createSlideLoader = (context, element) => {
  return new SlideLoader(context, {
    element: element,
    overview: { collapsed: false },
    dimensions: { collapsed: false },
    scaleLine: true,
    annotations: { visible: true, labels: true, showMeasurements: false },
    digitalZoomLevels: 2,
    loadingBar: true,
    highQuality: true,
    filename: true,
    barcode: false
  });
};

function App() {
  const viewerContainer = useRef(null);
  const slideLoaderRefs = useRef([]);
  const context = useRef(new UIContext({ caller: "Vas APP" }));
  const [slideLoaders, setSlideLoaders] = useState([]);

  useEffect(() => {
    // One ref per slide in the image set
    if (slideLoaderRefs.current.length === 0) {
      slideLoaderRefs.current = [...Array(imageSet.length)].map((r, i) => slideLoaderRefs.current[i] || createRef());
    }

    new AutoLogin(context.current, [{ serverUrl: pmaCoreUrl, username: username, password: password }]);
  }, []);

  useEffect(() => {
    // Wait until every container element has been rendered
    if (slideLoaderRefs.current.filter(c => !c || !c.current).length > 0) {
      return;
    }

    let slLoaders = [];
    for (let i = 0; i < slideLoaderRefs.current.length; i++) {
      slLoaders.push(createSlideLoader(context.current, slideLoaderRefs.current[i].current));
    }
    setSlideLoaders(slLoaders);
  }, [slideLoaderRefs.current]);

  return (
    <div className="App">
      <div ref={viewerContainer} className="flex-container">
        {slideLoaderRefs.current &&, i) =>
          <div className={"flex-item"} key={i} ref={slideLoaderRefs.current[i]}></div>)}
      </div>
    </div>
  );
}

export default App;

To properly show all 4 viewers on the same page we need some CSS to style it up, so add the following to index.css. This will split the page into a 2×2 grid of viewers using CSS flexbox.

.flex-container {
  display: flex;
  flex-direction: row;
  flex-wrap: wrap;
  width: 100%;
  height: 850px;
}

.flex-item.pma-ui-viewport-container {
  flex: 0 0 50%;
  height: 400px;
}

.ml-1 {
  margin-left: 15px;
}

Well that was easy to set up!


So let's synchronize this page for all the users joining, so they can see and interact with the same slides. For this we will be using the collaboration server and the pma.collaboration package we installed earlier. To collaborate, users have to join the same session, as it is called; one user is the master of the session and controls the current viewports, slides, and annotations (though a setting called EveryoneInControl can give control to all users). The collaboration backend uses SignalR and the WebSocket protocol to achieve real-time communication between participants, so we need to include its scripts in our page. We could include these scripts in index.html as we did for jQuery, but we need to be sure they are fully loaded before initializing anything from PMA.collaboration in our React application. So we will use the same trick Google Maps uses to load scripts asynchronously and notify our React app when they are ready. Create a new file called collaborationHelpers.js with the following function.

export const loadSignalRHubs = (collaborationUrl, scriptId, callback) => {
    const existingScript = document.getElementById(scriptId);
    if (!existingScript) {
        // Load the SignalR bundle first, then the generated hubs script
        const script = document.createElement('script');
        script.src = `${collaborationUrl}bundles/signalr`; = scriptId;
        document.body.appendChild(script);
        script.onload = () => {
            const script2 = document.createElement('script');
            script2.src = `${collaborationUrl}signalr/hubs`;
   = scriptId + "hubs";
            document.body.appendChild(script2);

            script2.onload = () => {
                if (callback) {
                    callback();
                }
            };
        };
    }
    if (existingScript && callback) {
        callback();
    }
};

To notify our React app that the scripts are ready and that initialization can proceed, we create a new piece of state in App.js called loadedScripts, which we set to true once the scripts have loaded, inside our previous useEffect function:
useEffect(() => {
    if (slideLoaderRefs.current.length == 0) {
      slideLoaderRefs.current = [...Array(20)].map((r, i) => slideLoaderRefs[i] || createRef());
    }

    let autoLogin = new AutoLogin(context.current, [{ serverUrl: pmaCoreUrl, username: "zuidemo", password: "zuidemo" }]);

    loadSignalRHubs(collaborationUrl, "collaboration", () => {
      // the SignalR scripts are now available
      setLoadedScripts(true);
    });
  }, []);

So now everything is ready to establish a connection to the PMA.live backend and to join a session (joining a non-existing session will just create a new one):
const initCollaboration = (nickname, isMaster, getSlideLoader, collaborationDataChanged, chatCallback) => {
  return Collaboration.initialize({
      pmaCoreUrl: pmaCoreUrl,
      apiUrl: collaborationUrl + "api/",
      hubUrl: collaborationUrl + "signalr",
      master: isMaster,
      getSlideLoader: getSlideLoader,
      dataChanged: collaborationDataChanged,
      owner: "Demo",
      pointerImageSrc: "…",
      masterPointerImageSrc: "…"
    }, [])
    .then(function () {
      var sessionName = "DemoSession";
      var sessionActive = false;
      var everyoneInControl = false;
      return Collaboration.joinSession(sessionName, sessionActive, nickname, everyoneInControl);
    })
    .then(function (session) {
      // after joining the session
      if (isMaster) {
        Collaboration.setApplicationData({ a: 1 });
      }
    });
};

The initialize method tells the Collaboration static object where to find PMA.core and the PMA.live backend, whether or not the current user is the session owner, and what icons to show for the users’ and the master’s cursor; it also accepts a couple of callback functions. The joinSession method will create a session if it does not exist and then join it. If the session doesn’t exist yet, it is possible to specify whether it starts out active, and whether all users can take control of the synced viewports or only the session owner. Once the session has been created, only the session owner can modify it, change its active status, or give control to others.

In order for PMA.live to be able to sync slides, it has to know the viewports it should work with. Earlier in our code we created an array of a maximum of 20 slide loaders, which we kept in a React ref object. Now let’s go back to the implementation of the “getSlideLoader” callback that we used during initialization. This function will be called by PMA.live when it needs to attach to a viewport in order to control it, so we need to return the appropriate slide loader from that React ref array:

const getSlideLoaderCb = (index, totalNumberOfImages) => {
    if (!master && totalNumberOfImages < numberOfImages) {
      for (let i = totalNumberOfImages; i < numberOfImages; i++) {
        // … (loop body elided in the original)
      }
    }

    return slideLoaders[index];
};

So now we can initialize the collaboration in a useEffect which executes after the SignalR and hubs scripts have properly loaded:

useEffect(() => {
    if (loadedScripts && viewerContainer.current && slideLoaderRefs.current && slideLoaders.length > 0) {
      if (slideLoaderRefs.current.filter(c => !c || !c.current).length > 0) {
        // not all slide loader refs are attached yet
        return;
      }

      if (collaborationInit) {
        // already initialized
        return;
      }

      initCollaboration("demo user", master,
        (index, totalNumberOfImages) => {
          return getSlideLoaderCb(index, totalNumberOfImages);
        },
        () => {
          let data = Collaboration.getApplicationData();
          let session = Collaboration.getCurrentSession();
          setCollaborationData({ data: data, session: session });
        })
        .then(() => {
          setCollaborationInit(true);
        });
    }
  }, [loadedScripts, viewerContainer, master, slideLoaders.length, collaborationInit, slideLoaderRefs, slideLoaderRefs.current.length]);

Finally, let’s talk about the “collaborationDataChanged” callback. Besides the out-of-the-box slide syncing capabilities, it gives you the ability to exchange data between users in real time. This could be useful, for example, if you wanted to implement a chat system on top of the session. Every user can change the session’s data by invoking the Collaboration.setApplicationData method. It accepts a single object that will be shared among users. Whenever this method is called, all other users receive the new data through the collaborationDataChanged callback. To do this in a React way, we simply store the application data in a React state object whenever the callback fires.
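To make the chat idea concrete, here is a minimal sketch. The helper below is hypothetical (our own naming, not part of the API); only setApplicationData/getApplicationData come from the Collaboration object described above:

```javascript
// Hypothetical helper for a chat built on the session's shared application data:
// returns a NEW application-data object with the message appended, ready to be
// passed to Collaboration.setApplicationData.
function appendChatMessage(appData, nickname, text) {
  const messages = (appData && appData.messages) || [];
  return { ...appData, messages: [...messages, { nickname, text }] };
}

// Usage (inside the app, once the session is joined):
//   const next = appendChatMessage(Collaboration.getApplicationData(), "demo user", "Hello!");
//   Collaboration.setApplicationData(next);
// Every other participant then receives `next` through collaborationDataChanged.
```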

To allow other users to join our application as guests, we will implement a query string parameter called master. When this parameter is set to false, users joining the session will be guests. We keep this value in a React state called master, and change our initial useEffect function to read it:

var urlSP = new URLSearchParams(window.location.search);
if (urlSP.get("master") === "false") {
    setMaster(false); // the setter of the `master` state
}

Congratulations, you’ve done it! You now have a working React application with PMA.UI and collaboration features enabled.

You can download a complete demo here

Additional developer resources are provided here

Custom panels and functionality in PMA.studio

PMA.studio has a lot of functionality out of the box, but sometimes you want more. Having PMA.studio open on one screen and your organization’s (AP)L(I)MS on another is not always ideal. And what if you don’t have multiple screens?

As we pointed out before, PMA.studio offers a panel-based layout. The standard panels can be moved around and even stacked. The standard ribbon in PMA.studio offers some convenient default layouts, too.

Further configuration of PMA.studio is available through the Configure tab, where you can enable individual panels.

But what if you can’t find the panel that you’re looking for? Maybe the content that you’re looking for is on a different website, and if only you could have that particular page available within PMA.studio as a separate panel…

Custom panels

PMA.studio offers the possibility to add a custom panel with content from a particular URL.

Let’s say that you want to have a reference website available next to your slide content. We’ll use our own wiki website as an example.

You could start by pulling up PMA.studio in one browser, and our wiki in another. You start off nice and smooth like this:

But soon your layout gets messy. The 50% screen width really gives too much room to the wiki, and let’s not even get into how easy it is for that other browser window to get snowed under a ton of other applications (the word processor you’re writing a paper in, your ELN, your EPR, your LIMS…)!

There’s a straightforward solution for you. Click on the “More” button in the Layout group on the Home tab:

And a new dialog shows up. In addition to selecting any number of default panels, you can also define custom panels. Like so:

After clicking OK, your new panel appears, making your overall screen layout look like this:

Now that’s more like it!

This is already great for reference data, but what if you want to combine this with slide awareness? In other words: you want to have the content in the panel change automatically depending on the selected slide.

Passing parameters

The custom panel mechanism in PMA.studio automatically passes along references to other webpages that trace back to the current PMA.core tile server and selected slide.

This mechanism is typically hidden from plain view to reduce the functional complexity of it all, but a single line of PHP brings up the necessary data:


When we create yet another custom panel that refers to this new page, we see the following appear:

And let’s just say that we don’t like the way PMA.studio displays a slide’s thumbnail and label image. We’d rather have those in separate panels, too, so we have more control over how they’re displayed.

We also know that the thumbnail of slide X can be reached via:


And the label image of slide X can be reached via:


We can therefore make two new scripts that take the input parameters received from PMA.studio and translate them into the correct querystring variables for our respective thumbnail and label images:

// first script (thumbnail):
$url = $_GET["server"]."…"; // append the thumbnail URL pattern (elided in the original)
header("location: $url");

// second script (label):
$url = $_GET["server"]."…"; // append the label URL pattern (elided in the original)
header("location: $url");

We place the new files on a server, and reference them via two separate custom panels:

When we navigate the slides in a folder one by one, we now see that the panels are updated accordingly, too.

Other applications

In this post, we showed you how you can configure custom panels in PMA.studio exactly to your liking, and how the content of such panels can synchronize with the currently viewed slide.

We showed you how to pass along information through some trivial examples, referring back to our own infrastructure.

Now you can build your own interfaces, like we demonstrated in an earlier blog article. PMA.studio is more than just a universal slide viewer; you can turn it into your own veritable organizational cockpit. Think e.g. about custom database queries against your back-end LIMS, bio-repository, or data warehouse. You can show the data where you want it, when you want it, all with a few configuration tweaks. No longer do you have to juggle multiple browsers; PMA.studio simply allows you to build your own custom dashboards.

Find out more about PMA.studio through our landing page.

Sharing facilities in PMA.studio

Let’s get together

Sharing content is arguably one of the most important applications of digital pathology, if not of the Web in general. PMA.studio allows you to share content in a variety of ways. There is a dedicated group for sharing content on the ribbon:

When you just want to share what you’re currently looking at, chances are that you can get by with one of the quick share buttons:

  • If you want to share the current folder you’re navigating, click on the “Share folder” button
  • If you want to share the current slide that you’re looking at, click on the “Share slide” button
  • If you want to share the current grid that you’re looking at, click on the “Share grid” button
  • Etc.

If you want more control over what and how you’re sharing content, you can click on the final “Share” button of the group. You could say that that’s our “universal” share button.

It allows for further customization of your share link, including:

  • Password-protect your link
  • Expire the link (e.g. students can only access it for the duration of a test)
  • Include or exclude annotations from the shared link
  • Use a QR code instead of a plain text link
  • Etc.

Our best advice is for you to play with the various options. But do let us know when you think there are some features missing or you think something is broken.

Share administration

We’ve worked hard on making the sharing concept in PMA.studio broadly applicable to a variety of content. We’ve also worked on making it easy to share content.

So with all this sharing going on then, it’s only natural to be asking after a while “wait, what am I actually sharing?”.

On the Configure tab, in the “Panels” group, you can activate the “Shared links” panel.

Once clicked, you get a new panel with an overview of everything you’ve shared so far.

The buttons behind each link allow various operations.

One application of this is to recycle a share link and re-use it as you see fit.

You can also (temporarily) invalidate links, or delete them altogether.

The history is linked to your PMA.core login, so if at first you don’t see anything, make sure that you’re connected to the PMA.core instance for which you expect to see share links.


On the back-end, administrators can get an overview of all created shares across all users. They can also use this view to temporarily suspend or even delete shares.


While we highly advocate the implementation of the PMA.UI framework in third-party software like (AP)L(I)MS, PACS, VNA, and other digital pathology consumers, we realize that this is not trivial. In a proof-of-concept phase, all you may want to do is show a button in your own user interface that then subsequently just pops up a viewport to the content that you want to launch. Easy-peasy, as they say.

Let’s say that you have an existing synoptic reporting environment that looks like this:

In order to convince your administration that adding digital pathology to it is a really good idea, you want to upgrade the interface to this:

With PMA.studio, you can now get exactly this effect.

Let’s switch to Jupyter to see how this works:

First, some housekeeping: we import the pma_python core module and connect to the PMA.core instance that holds our slide.

Our slide is stored at “cases_eu/breast/06420637F/HE_06420637F0001S.mrxs”. Let’s make sure that the slide exists in that location by requesting its SlideInfo dictionary:

Alternatively, we can also write some exploratory code to get to the right location:


Ok, we’ve identified our slide. Now let’s create the share link. Unfortunately, pma_python doesn’t have a studio module yet, so we’ll have to interface with the API directly for the time being.

The back-end call that we need is /API/Share/CreateLinkForSlide and takes on the following parameters:

We create the URL that invokes the API by hand first. We can do this accordingly:

Never mind the pma._pma_q() method that we use; it’s a fast and easy way to properly encode HTTP querystring arguments. You’re free to piggy-back on ours, or use your own preferred method.
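For comparison, the same hand-built URL can be sketched in JavaScript, with encodeURIComponent playing the role of pma._pma_q(). Note the assumptions: the server address is a placeholder, and since the original parameter table isn’t reproduced here, the sessionID/pathOrUid parameter names are our guess at a minimal call:

```javascript
// Sketch: building the CreateLinkForSlide URL by hand (parameter names assumed).
function buildCreateLinkForSlideUrl(studioUrl, sessionId, pathOrUid) {
  return studioUrl + "API/Share/CreateLinkForSlide"
    + "?sessionID=" + encodeURIComponent(sessionId)
    + "&pathOrUid=" + encodeURIComponent(pathOrUid);
}

const shareUrl = buildCreateLinkForSlideUrl(
  "https://myserver/pma.studio/",   // placeholder server address
  "SDK_SESSION_ID",                 // placeholder session ID
  "cases_eu/breast/06420637F/HE_06420637F0001S.mrxs"
);
// Remember: this URL still has to be invoked (browser or HTTP GET)
// before the share link actually exists.
```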

After execution of the code, you get a URL that looks like this:


Constructing the URL by itself doesn’t do anything; it’s invoking it that creates the share link. You can do this either by copying the URL into a web browser, or by invoking it from Python as well:

Again: it’s the return result of invoking the URL that you want to distribute to others, not the URL itself.

To confirm that it worked, go back to PMA.studio and check the panel with the share link overview:

But you can also just pull up the resulting URL in a new browser window:

Yay, it worked!

Automating folders

You can also create links that point to folders.

That slide that we just referenced? It turns out to be an H&E slide, and together with the other slides in the folder it actually comprises a single patient case.

So you can emulate cases by organizing their slides per folder, with each folder representing a case. Based on the slide path we used above, your hierarchy can then look like this:

cases_eu/
	breast/
		06420637F/
			HE_06420637F0001S.mrxs
			(the case’s other slides)
Say that we want to offer a case-representation of breast cancer patient 06420637F. We use Share/CreateLinkForFolder and point to a folder instead of a slide:

The result again appears in the side panel. Clicking on it brings up a mini-browser interface:

What’s next

After PMA.core, we’re starting to provide back-end API calls into PMA.studio as well. Even though we prefer developers to integrate with PMA.UI directly, there are scenarios where automation through the PMA.studio API makes sense — for example when:

  • PMA.studio is your main cockpit interface for working with slide content, but there are a few other routes (like an intranet) through which you want to provide quick access to content, too.
  • You have an (AP)LI(M)S, PACS, VNA, or other system and you’re in a PoC phase adding digital pathology capabilities to your own platform; automation may then be a quicker route than adopting our SDKs.

Do keep in mind however that we’re providing the back-end mostly for convenience, at least for the time being. There are any number of ways in which you may want to integrate digital pathology in your infrastructure and workflows. For a high level of customization, you’re really going to have to move up to PMA.UI, along with a back-end counterpart like PMA.python, PMA.php, or PMA.java.

Four ways to identify slides

By filename

The most straightforward way to identify a slide is by its filename.

When you request the slides that are in a subfolder, you get a list of filenames back. Each filename refers to a slide.
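As a sketch, such a request against PMA.core’s JSON API could look like this. GetFiles is the call our SDKs use for folder listings; the server URL and session ID below are placeholders, and the fetch itself (commented out) naturally requires a live PMA.core instance:

```javascript
// Sketch: list the slides in a folder via PMA.core's JSON API.
// Only the URL construction runs offline; the fetch needs a live server.
function buildGetFilesUrl(serverUrl, sessionId, path) {
  return serverUrl + "api/json/GetFiles"
    + "?sessionID=" + encodeURIComponent(sessionId)
    + "&path=" + encodeURIComponent(path);
}

const listUrl = buildGetFilesUrl(
  "https://myserver/pma.core/", // placeholder
  "SDK_SESSION_ID",             // placeholder
  "cases_eu/breast/06420637F");
// fetch(listUrl).then(r => r.json()).then(filenames => console.log(filenames));
// => a list of filenames, each referring to a slide
```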

By Unique Identifier (UID)

Once you obtain a filename referring to a slide, you typically want to do something with it.

Using filenames as references throughout your software is problematic however, for a variety of reasons:

  • A full path reference can become really long, and may not fit a field. No matter how careful you are, at some point there’s always that 51-character string that just won’t quite fit into the varchar field that was defined with a standard field size of varchar(50)
  • Unicode encoding can be tricky, and many languages complicate matters further by providing different methods for querystrings, querystring parameters etc. Not to mention the database field that you forgot to make nvarchar instead of varchar. Good luck chasing that one!
  • Using filename references is just not safe. Imagine that you’re passing on a URL that looks like lookAtMySlide.jsp?slide=case35%2fslide03.svs… It’s all too easy (or even tempting) for the recipient to want to try out variations on that scheme: “hmm, I wonder what slides 2 and 4 look like?” or “let’s have a look at cases 1-34 too”

For this purpose, we’ve introduced the UID principle. A UID is a 6-character random string, tied to a particular slide in a particular location (folder). The UIDs are generated by the PMA.core engine, so no collision is possible between UIDs referring to different slides. By their nature, there’s no sequential logic to them either, so there’s no point asking for information about slides YT4TGQ or YT4TGS after finding out that slide YT4TGR exists.

You can retrieve the UID of any slide through the GetUID() method. For this, you’ll need an instance of PMA.core, because slide anonymization is not supported by our free PMA.start viewer.

If you’ve had a look at our API calls and SDK methods, you’ll notice that many calls have a parameter PathOrUid, rather than just Path. This means that each time you’d specify a filename to identify a slide, you might as well make life a bit easier on yourself (as well as on your compliance department!) and use the UID instead. Have a look then at the following semantically identical calls:
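Since the original example isn’t reproduced here, the equivalence can be sketched as follows. The endpoint name is purely illustrative; what matters is that the same PathOrUid parameter accepts either form (YT4TGR is the example UID from this article, the server and session values are placeholders):

```javascript
// Semantically identical: address the slide by full path, or by its UID.
const server = "https://myserver/pma.core/"; // placeholder
const session = "SDK_SESSION_ID";            // placeholder

const byPath = server + "api/json/GetSlideInfo?sessionID=" + session
  + "&pathOrUid=" + encodeURIComponent("cases_eu/breast/06420637F/HE_06420637F0001S.mrxs");

const byUid = server + "api/json/GetSlideInfo?sessionID=" + session
  + "&pathOrUid=YT4TGR"; // short, opaque, and safe to pass around
```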

The one notable exception to this would be the invocation of the GetUID() method itself, of course. But don’t worry; you can’t accidentally request the UID of a UID. If a slide reference doesn’t refer to any existing content, you’ll just get a runtime error instead.

By Fingerprint

UIDs are great, but what if you want to track virtual slides over time? Physical slides aren’t static and move around; this real-life behavior is oftentimes mimicked in the virtual world, where the lifecycle of a slide can go like this:

Different systems may be responsible for the different types of movement, making it very hard to track the virtual slide’s lifecycle in its entirety.

This is where a slide’s fingerprint can come in handy. Unlike a UID, the fingerprint is a signature string that is calculated based on a slide’s actual characteristics. We have a whole separate article on the subject.

The bottom line is that when you have new_slides/slide17.svs, and you move it to validated_slides/slide17.svs, you’ll be able to identify these slides as being identical through their fingerprint signature.

Let’s say that we have a slide slide54123.mrxs in the incoming folder, that gets subsequently moved (and renamed) to a folder related to bladder research, to finish its lifecycle in an archival folder.

Have a look at the following code then:
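Since the code itself isn’t reproduced here, the idea can be sketched as follows. The paths, UIDs and fingerprint values below are made up purely for illustration:

```javascript
// Three copies of the same physical slide at different lifecycle stages:
// each location gets its own UID, but the fingerprint (derived from the
// slide's actual content) stays identical across moves and renames.
const lifecycle = [
  { path: "incoming/slide54123.mrxs",                uid: "QW1ER2", fingerprint: "d41d8c…" },
  { path: "bladder_research/case07_slide54123.mrxs", uid: "TY3UI4", fingerprint: "d41d8c…" },
  { path: "archive/2021/case07_slide54123.mrxs",     uid: "OP5AS6", fingerprint: "d41d8c…" },
];

const sameSlide = (a, b) => a.fingerprint === b.fingerprint;

console.log(sameSlide(lifecycle[0], lifecycle[2])); // true: same slide, different UID and path
```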

Note that the UID is different for all three slides, but the fingerprint remains the same, even as the filename changes!

And you can use this for even more applications:

If you’re later wondering whether new_slides/slide17.svs is a re-scanned slide, or just the original slide that somebody forgot to delete, you can also use the fingerprint for this. If it is the old version that is just lingering, it will still have the same fingerprint. However, if it’s a newly re-scanned version of the physical slide, the fingerprint will be different, due to subtle changes in the image capturing process. It’s interesting to note that in the latter case, the UID will still be the same.

Why would you still use a UID instead of a fingerprint signature? Because a fingerprint takes some time to generate: a slide must actually be (at least partially) read and analyzed to obtain its fingerprint. The UID, in contrast, is only a random string that is generated in a split second. In many cases, all you want is a pointer to a slide; for a variety of reasons, the UID is then the faster alternative.

By barcode

Virtual slides come from physical slides, so how did people identify slides before the advent of technology? Well, first they invented the area on the slide that we now refer to as the slide label, and coated it with a material that could easily be written on (typically in pencil). Later on, label printers were introduced, and in combination with bio-repository systems provided by the (AP)LI(M)S, random (bar)codes could now be imprinted on small stickers and pasted directly onto the slides, so that no scribbling was necessary anymore.

The idea is that the barcodes are machine-readable and can be used to match all sorts of information afterwards, while at the same time guaranteeing anonymity of the slide itself (a barcode identifier only makes sense in the context of a particular hospital / lab / (AP)LI(M)S / biorepository).

A barcode in many ways is the real-world equivalent of a UID, but it doesn’t have to be. Consider this:

  • There can be structure to encoded barcodes, most often sequence information
  • The barcode can still encode certain patient or doctor information, though this is rather rare. People aren’t doing this anymore because of practical concerns, as a barcode only holds a limited number of characters.
  • Unlike UIDs, barcodes can take on all sorts of shapes and forms, making it difficult to provide universal identification services
  • You can have more than one barcode on a slide
  • The barcode can still be pasted on manually at an angle, which can make it hard for machines to recognize, whereas a human lab technician in such a case would just rotate her hand-scanner a bit.
  • The barcode can be applied to a slide before it goes through a series of chemical steps, that can in turn degrade the barcode and make it less legible

Even with the above caveats, we don’t dispute the value of barcoding per se. It’s definitely way better than the alternative (pencil, scribbling). At the same time, barcoding makes most sense within a setting of one lab (or lab group), one set of hardware, and one (AP)LI(M)S, so that the entire pipeline can be calibrated to a (beforehand agreed-upon) specific format.

At Pathomation, we offer the possibility to extract barcodes from label images through the GetBarcode() method. Our basic implementation works on a wide variety of labels:

Another way to study this behavior is via the debugger in your web browser:

The unfortunate thing about barcodes is that recognition is not waterproof. The resolution of your scanner may not be high enough, and there are a number of other reasons it can still go wrong. We’ve seen odd things happen, like the digit “1” being interpreted as the lowercase letter “l”, etcetera.

You can use the existing implementation for test scenarios. However, for production environments, we should be involved in the IQ / OQ / PQ loop, so we can advise properly on how best to roll this out. When you know that your identification scheme only includes numbers, we can configure the recognition engine to not accidentally pick up letters (and prevent a 1 → l switch from ever happening in the first place).

In summary

At Pathomation, we pride ourselves on the slogan “digital pathology for pathologists, by pathologists”. We know the struggles of identifying and keeping track of slides, both physically and virtually. Therefore, we offer different ways of identification. We’ve published content on this before, but this is the first article in which we neatly outline all options next to each other.

Virtualize your multi-head microscope

Syncing viewports across different users

Ever wanted to have one user in the driver’s seat while others are watching, each looking at their own screen, just like with a multi-head microscope but over the internet? This can probably be accomplished with a screen sharing & conferencing tool, but the image quality may be poor. How about allowing all users to take over the viewport at the same time? Drawing annotations simultaneously? Allowing users to share multiple viewports? Having this functionality in your own application? It gets more complicated now, right? Wrong. PMA.live enables exactly this functionality out of the box and can be integrated into your application without a lot of coding or complicated setup procedures. How does it work, and why is it more efficient than a traditional screen share? Because PMA.live tells connected clients which tiles to download, and each client then retrieves those tiles directly from the tile server. This is more efficient and elegant than broadcasting pixels.

The ingredients you need are:

  • PMA.core – Where digital slides come from
  • PMA.UI – the UI Javascript library
  • PMA.live – Pathomation’s collaboration platform

In the page where we want to add collaboration functionality we need the following JS libraries included:

Let’s start by syncing a single viewport. In PMA.live terminology, enabling users to collaborate by looking at the same slides is called a session. A user must therefore first create a session, which the rest of the participants then join.

The first step is to establish a connection to the backend:

Collaboration.initialize({
	pmaCoreUrl: pmaCoreUrl,
	apiUrl: `${collaborationUrl}api/`,
	hubUrl: `${collaborationUrl}signalr`,
	master: isMaster,
	dataChanged: collaborationDataChanged,
	pointerImageSrc: "pointer.png",
	masterPointerImageSrc: "master-pointer.png",
	getSlideLoader: getSlideLoader,
}, []);

The initialize method tells the Collaboration static object where to find PMA.core and the PMA.live backend, whether or not the current user is the session owner, and what icons to show for the users’ and the master’s cursor; it also accepts a couple of callback functions, which we will explain later.

Once PMA.live has been initialized, we can continue by either creating or joining a session:

Collaboration.joinSession(sessionName, sessionActive, userNickname, everyoneInControl);

The joinSession method will create a session if it does not exist and then join it. If the session doesn’t exist yet, it is possible to specify whether it starts out active, and whether all users can take control of the synced viewports or only the session owner. Once the session has been created, only the session owner can modify it, change its active status, or give control to others.

In order for PMA.live to be able to sync slides, it has to know the viewports it should work with. In this example, we will first create a slide loader object:

const sl = new PMA.UI.Components.SlideLoader(context, {
				element: slideLoaderElementSelector,
				filename: false,
				barcode: false,
});

Now let’s tell PMA.live that we are only going to be syncing a single viewport:


Now let’s go back to the implementation of the “getSlideLoader” callback that we used during initialization. This function will be called by PMA.live when it needs to attach to a viewport in order to control it. The implementation in this example looks like this:

function getSlideLoader(index, totalNumberOfImages) {
	return sl;
}
We just return the one and only slide loader that we instantiated earlier.

Finally, let’s talk about the “collaborationDataChanged” callback. PMA.live uses SignalR and the WebSocket protocol to achieve real-time communication between participants. Besides the out-of-the-box slide syncing capabilities, it gives you the ability to exchange data between users in real time. This could be useful, for example, if you wanted to implement a chat system on top of the session. Every user can change the session’s data by invoking the Collaboration.setApplicationData method. It accepts a single object that will be shared among users. Whenever this method is called, all other users receive the new data through the collaborationDataChanged callback, which looks like this:

function collaborationDataChanged() {
	console.log("Collaboration data changed");
}
Summing it all up, PMA.live provides an easy way to enable real-time collaboration between users. It takes the burden of syncing data and digital slides away from the developer, and allows you to focus on integrating the Pathomation toolbox into your application.

You can find a complete example here.

Who’s in the driver’s seat?

We offer a platform…

Pathomation is not just about selling software. We offer a comprehensive platform with a variety of technical components that help you build tailor-made digital pathology environments and set up custom workflows.

From simple viewing to automated back-end image analysis and data integration, we believe we have the broadest offering on the market today. Best of all: our technology is centered around PMA.core, a powerful tile server on top of which everything else can be connected and integrated.

Recently, we published a video in which we showcase how one of our customers adopted our components into their own SaaS solution:

The customer offers services for second-opinion counseling. It had already built a proprietary workflow portal for patients and pathologists to log in and submit new, or evaluate existing, data. Until the Pathomation components were integrated, however, slide exchange was limited to upload and download mechanisms.

The front-end uses primarily two controls: PMA.UI Gallery and PMA.UI Viewport.

Now that the Pathomation PMA.UI slide visualization framework is integrated in the customer’s codebase, things run a lot smoother: slides are uploaded directly to the customer’s website, visualization is instant thanks to PMA.UI’s viewport component, and annotations can be added by various actors throughout a submitted case’s workflow.

To help customers like Agoko get started and guide them through the process, we have our own developer portal. There, you can find articles and tutorials on how to adopt our technology both on the client side (JavaScript) and the server side (Java, Python, PHP).

We’re working on a dedicated YouTube channel for Pathomation software developers, too. Head over there to get the basic skills you need to get started in the respective programming language of your choice.

… in more ways than one

While we have a number of customers that currently integrate our platform into their own online infrastructure, this is not for everyone. If you already have an application, and you have the technical resources (people) to work on this, that’s great. But what if you’re more limited in terms of time, money, or staff? If you’re a startup, every decision has its implications. If you’re an image analysis or algorithm shop, for instance, you may invest more heavily in your back end than in your front-end presentation (or at least postpone the latter for a later stage).

But you still want to allow people to upload slides. You still want to be able to allow experts (human or AI) to make annotations, and you still want _your_ end-users to see findings and results.

In that case, PMA.studio may be a more convenient solution than learning how to integrate individual components. PMA.studio is a web-based slide viewer. In its simplest form, it looks like this:

However, PMA.studio is much more than just a slide viewer. All those integration capabilities that we mentioned earlier in the context of PMA.core can be visualized through various controls and panels. So while you can use PMA.studio as a viewer, it’s more likely you’ll end up with interfaces that look like this:

PMA.studio brings modern user interface elements like a ribbon and a panel layout to a browser-based digital pathology environment. For the panels, we use the GoldenLayout library. From the start, we realized that one size will never fit all, so we made the layout of both the ribbon and the panels completely configurable via XML configuration files. You can do this both for panels:

And for the ribbon:

Not only that, but you can also create new custom panels that retrieve data from your own databases, present workflows, etc. This means that our customer might as well have ended up with an interface that looks like this:

Flexibility and choices

The flexibility offered through both our SDKs and PMA.studio allows any customer to easily white-label our various software components.

Which route you decide to go depends on you, but having had experience with various customers on different projects, we’d be happy to guide you along the way. Contact us for a free consultation today and see how we can help take your digital pathology infrastructure to the next level.