Working with digital microscopy imaging data using our Python SDK

Representation

PMA.start is a free desktop viewer for whole slide images. In our previous post, we introduced you to pma_python, a novel package that serves as a wrapper-library and helps interface with PMA.start’s back-end API.

The images PMA.start typically deals with are called whole slide images, so how about we show some pixels? As it turns out, this is really easy. Just invoke the show_slide() call. Assuming you have a slide at c:\my_slides\alk_stain.mrxs, we get:

from pma_python import core
slide = "C:/my_slides/alk_stain.mrxs"
core.show_slide(slide)

The result depends on whether you’re using PMA.start or a full version of PMA.core. If you’re using PMA.start, you’re taken to the desktop viewer:

If you’re using PMA.core, you’re presented with an interface with fewer frills: the web browser is still involved, but nothing more than scaffolding code around a PMA.UI.View.Viewport is offered (which actually allows for more powerful applications):

Associated images

But there’s more to these images; if you only wanted to view a slide, you wouldn’t bother with Python in the first place. So let’s see what else we can get out of them.

Assuming you have a slide at c:\my_slides\alk_stain.mrxs, you can execute the following code to obtain a thumbnail image representing the whole slide:

from pma_python import core
slide = "C:/my_slides/alk_stain.mrxs"
thumb = core.get_thumbnail_image(slide)
thumb.show()

But this thumbnail presentation alone doesn’t give you the whole picture. You should know that a physical glass slide usually consists of two parts: the larger part of the slide contains the specimen of interest and is represented by the thumbnail image. At one end, however, a label is usually pasted on with information about the slide: the stain used, the tissue type, perhaps even the name of the physician. More recently, the label often has a barcode printed on it, for easy and automated identification of a slide. The label is therefore sometimes also referred to as a “barcode”. Because the two terms are used so interchangeably, we decided to support them in both forms, too. This makes it easier to write code that not only makes sense syntactically, but also applies semantically in your work environment.

A systematic representation of a physical glass slide can then be given as follows:

The pma_python library then has three methods to obtain slide representations, two of which are aliases of one another:

core.get_thumbnail_image() returns the thumbnail image

core.get_label_image() returns the label image

core.get_barcode_image() is an alias for core.get_label_image()

All of the above methods return PIL Image objects. It actually took some discussion to figure out how to package the data. Since the SDK wraps around an HTTP-based API, we settled on representing pixels through Pillow, the successor to the Python Imaging Library (PIL). The package should be installed for you automatically when you install pma_python.

The following code shows all three representations of a slide side by side:

from pma_python import core
slide = "C:/my_slides/alk_stain.mrxs"
thumb = core.get_thumbnail_image(slide)
thumb.show()
label = core.get_label_image(slide)
label.show()
barcode = core.get_barcode_image(slide)
barcode.show()

The output is as follows:

Note that not all WSI files have label / barcode information in them. In order to determine what kind of associated images there are, you can inspect a SlideInfo dictionary first to see what’s available:

info = core.get_slide_info(slide)
print(info["AssociatedImageTypes"])

AssociatedImageTypes may refer to more than thumbnail or barcode images, depending on the underlying file format. The most common use of this function is to determine whether a barcode is included or not.

You could write your own function to determine whether your slide has a barcode image:

def slide_has_barcode(slide):
    info = core.get_slide_info(slide)
    return "Barcode" in info["AssociatedImageTypes"]

Tiles in PMA.start

We can access individual tiles within the tiled stack using PMA.start, but before we do that, we should take a closer look at a slide’s metadata.

We can start by making a table of all zoomlevels and the number of tiles per zoomlevel, along with the magnification represented at each zoomlevel:

from pma_python import core
import pandas as pd

level_infos = []
slide = "C:/my_slides/alk_stain.mrxs"
levels = core.get_zoomlevels_list(slide)
for lvl in levels:
    res_x, res_y = core.get_pixel_dimensions(slide, zoomlevel = lvl)
    tiles_xyz = core.get_number_of_tiles(slide, zoomlevel = lvl)
    level_info = {
        "res_x": round(res_x),
        "res_y": round(res_y),
        "tiles_x": tiles_xyz[0],
        "tiles_y": tiles_xyz[1],
        "approx_mag": core.get_magnification(slide, exact = False, zoomlevel = lvl),
        "exact_mag": core.get_magnification(slide, exact = True, zoomlevel = lvl)
    }
    level_infos.append(level_info)

df_levels = pd.DataFrame(level_infos, columns=["res_x", "res_y", "tiles_x", "tiles_y", "approx_mag", "exact_mag"])
print(slide)
print(df_levels)

The result for our alk_stain.mrxs slide looks as follows:

Now that we have an idea of the number of zoomlevels to expect and how many tiles there are at each zoomlevel, we can request an individual tile easily. Let’s say that we wanted to request the middle tile at the middle zoomlevel:

slide = "C:/my_slides/alk_stain.mrxs"
levels = core.get_zoomlevels_list(slide)
lvl = levels[round(len(levels) / 2)]
tiles_xyz = core.get_number_of_tiles(slide, zoomlevel = lvl)         
x = round(tiles_xyz[0] / 2)
y = round(tiles_xyz[1] / 2)
tile = core.get_tile(slide, x = x, y = y, zoomlevel = lvl)
tile.show()

This should pop up a single tile:

Ok, perhaps not that impressive.

In practice, you’ll typically want to loop over all tiles in a particular zoomlevel. The following code will show all tiles at zoomlevel 1 (increase to max_zoomlevel at your own peril):

tile_sz = core.get_number_of_tiles(slide, zoomlevel = 1) # zoomlevel 1
for xTile in range(0, tile_sz[0]):
    for yTile in range(0, tile_sz[1]):
        tile = core.get_tile(slide, x = xTile, y = yTile, zoomlevel = 1)
        tile.show()

The advantage of this approach is that you have control over the direction in which tiles are processed. You can also process row by row and perhaps print a status update after each row is processed.
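
For example, a minimal sketch of row-by-row processing with a status update after each row (re-using the get_number_of_tiles() and get_tile() calls from above) could look like this:

from pma_python import core

slide = "C:/my_slides/alk_stain.mrxs"
tile_sz = core.get_number_of_tiles(slide, zoomlevel = 1)

# y in the outer loop: process the slide row by row, left to right within each row
for yTile in range(0, tile_sz[1]):
    for xTile in range(0, tile_sz[0]):
        tile = core.get_tile(slide, x = xTile, y = yTile, zoomlevel = 1)
        # do something with the tile here
    print("Finished row " + str(yTile + 1) + " of " + str(tile_sz[1]))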

However, if all you care about is to process all rows left to right, top to bottom, you can opt for a more condensed approach:

import numpy

tile_sz = core.get_number_of_tiles(slide, zoomlevel = 4) # this time at zoomlevel 4
for tile in core.get_tiles(slide, toX = tile_sz[0], toY = tile_sz[1], zoomlevel = 4):
    data = numpy.array(tile)

The body of the for-loop now processes all tiles at zoomlevel 4 one by one and converts them into a numpy array, ready for image processing to occur, e.g. through OpenCV. But that will have to wait for another post.
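
As a tiny taste of what “ready for image processing” means (the real OpenCV material is indeed for another post), here’s a minimal sketch that merely counts how many tiles are mostly blank background; the 230 grayscale threshold is an arbitrary assumption for illustration:

from pma_python import core
import numpy as np

slide = "C:/my_slides/alk_stain.mrxs"
tile_sz = core.get_number_of_tiles(slide, zoomlevel = 4)

blank_tiles = 0
for tile in core.get_tiles(slide, toX = tile_sz[0], toY = tile_sz[1], zoomlevel = 4):
    data = np.array(tile.convert("L"))    # 8-bit grayscale pixel values
    if data.mean() > 230:                 # mostly whitespace, so probably no tissue
        blank_tiles += 1

print(str(blank_tiles) + " of " + str(tile_sz[0] * tile_sz[1]) + " tiles look blank")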

We did it! SDK update for Python.

SDK update

Now that I have a basic understanding of Python (including the popular packages pandas, matplotlib.pyplot, and (to a slightly lesser extent) numpy), I’m moving ahead and am putting out Java, R, Eiffel, Haskell, and F# API wrapper libraries as part of our SDK!

Ok, perhaps not.

To prevent ending up with a plethora of half-finished source code files across a variety of languages, we thought it more prudent this time to work on a comprehensive library in a single programming language first, and then port it to other environments.

For Python, this means writing one or more modules and publishing them as a package. Others have done this before us, so how hard could it be, right?

So we set off to write an initial module, with a number of procedures to do basic tasks such as obtaining lists of slides, navigating a hierarchical folder structure, and, of course, extracting tiles. The code was deposited on GitHub.

After GitHub typically comes support for the Python Package Index (PyPI). We recruited somebody through Upwork to help us with this process, and here we are: getting an interface to Pathomation software in Python is now as easy as issuing the command “python3.exe -m pip install pma-python”.

Oh, and we already tested this in other environments, too. The following screenshot was taken on a Mac (thank you, Pieter-Jan Van Dam):

Getting started

What can you do with our Python SDK today? If you have PMA.start installed on your system, you can go right ahead and try out the following code:

from pma_python import pma

if pma.is_lite():
    print("Congratulations; PMA.start is running on your system")
    print("You’re running PMA.core.lite version " + pma.get_version_info())
else:
    print("PMA.start not found. Either you don’t have it installed, or you don’t have the server-component running currently")
    raise Exception("PMA.start not detected")

By the way, you can use the same is_lite() method to ask your end-user to make sure PMA.start IS running before continuing the script:

from pma_python import pma
from sys import exit

if (not pma.is_lite()):
    print("PMA.core.lite is NOT running.")
    print("Make sure PMA.core.lite is running and press <enter> to continue")
    input()

if (not pma.is_lite()):
    print("PMA.core.lite is NOT running.")
    exit(1)

Slides

Now that you know how to establish the availability of PMA.start as a back-end for whole slide imaging (WSI) data, you can start looking for slides:

from pma_python import pma

if not pma.is_lite():
    raise Exception("PMA.start not detected")

# assume that you have slides in C:\my_slides (note the capital C)
for slide in pma.get_slides("C:/my_slides"):
    print(slide)

But you already knew that you had slides in that folder, of course. By the way, if NO data shows up, check the specified path. It’s case sensitive, and drive letters have to be capitalized. Also make sure to use a forward slash instead of the more traditional (on Windows, at least) backslash.

What you probably didn’t know yet are the dimensions of the slide, both in pixels and in micrometers.

print("Pixel dimensions of slide:")
xdim_pix, ydim_pix = pma.get_pixel_dimensions(slide)
print(str(xdim_pix) + " x " + str(ydim_pix))

print("Slide surface area represented by image:")
xdim_phys, ydim_phys = pma.get_physical_dimensions(slide)
print(str(xdim_phys) + "µm x " + str(ydim_phys) + "µm = ", end="")
print(str(xdim_phys * ydim_phys / 1E6) + " mm2")

Below is the output on our computer, having 3 3DHistech MRXS slides in the c:\my_slides folder. You can use this type of output as a sanity check, too.

While the numbers in µm seem huge, they start to make more sense once translated to the surface area captured. As a reminder: 1 mm2 = 1,000,000 µm2, which explains why we divide by 1E6 to get the area in mm2. 1020 mm2 still not saying much? Then keep in mind that 100 mm2 equals 1 cm2, and that 10 cm2 can very well constitute a 2 cm x 5 cm piece of tissue. A physical glass slide typically measures about 7.5 cm x 2.5 cm. Phew, glad the data matches reality!

Determining a slide’s magnification

We can also determine the magnification at which an image was registered. The get_magnification function has a Boolean exact= parameter that works as follows: when set to False, get_magnification rounds to the nearest “whole number” magnification that’s typically mentioned on a microscope’s objective lens. This could be 5X, 20X, 40X… But bear in mind that when a microscopist looks through his device, he can fine-focus on a sample, thereby slightly modifying the actual magnification used, perhaps from 40X to 38X (even though the label on the lens still says 40X, of course). Scanners work in the same manner; because of auto-focusing, the end result of a scan may be at 38X instead of 40X, or 21X instead of 20X. And that is the number that is returned when the exact= parameter is set to True.

 

Of course, when building a pandas Dataframe, you might as well include columns for both measurements (perhaps using the rounded measurement later for a classification task):

from pma_python import pma
import pandas as pd

if not pma.is_lite():
    raise Exception("PMA.start not detected")

# create blank list (to be converted into a pandas DataFrame later)
slide_infos = []

# assume that you have slides in C:\my_slides (note the capital C)
for slide in pma.get_slides("C:/my_slides"):
    slide_info = {
        "slide": pma.get_slide_file_name(slide),
        "approx_mag": pma.get_magnification(slide, exact=False),
        "exact_mag": pma.get_magnification(slide, exact=True),
        "is_fluo": pma.is_fluorescent(slide),
        "is_zstack": pma.is_z_stack(slide)
    }
    slide_infos.append(slide_info)

df_slides = pd.DataFrame(slide_infos, columns=["slide","approx_mag","exact_mag", "is_fluo", "is_zstack"])
print(df_slides)

The output of this script on our computer is as follows:

Note that for one slide, both the exact and the approximate magnification are 0. This is because that particular slide is a .jpg file, which doesn’t contain any useful metadata (such as pixels per micron) from which to determine the magnification.

Almost there

In our next post, we’ll show how you can retrieve various types of image data from your digitized slides.

E-learning update

Remember my training goals for 2018? I think I’m doing okay. I completed the Excel XSeries at edX. When completing the XSeries, you get a comprehensive dedicated certificate for completing the full track; individual certificates are still available on a per-course basis, too.

Now that I’m done with this, I can definitely recommend an XSeries track at edX, because across the different courses, various aspects of a subject really are approached from different angles. The repetition you get over time is hugely beneficial for getting a solid grasp of the subject.

Moving on then.

Since all ML-courses at one point or another converge to Python (including Ng’s new Deep Learning curriculum at Coursera), I temporarily switched to Datacamp’s Python track. This morning, I finished the Python Programmer track (consisting of 10 individual courses).

In the next two blog posts, I’ll talk more about how I’m putting my newly learned Python skills to work. Stay tuned.

About APIs and SDKs

API vs. SDK

Early on in the life of Pathomation, it became clear that in order to tackle the variety of use cases out there for digital pathology, we needed to build a veritable piece of digital chameleon software.

Luckily there is a way to do exactly that in engineering, and that is through the establishment of an Application Programming Interface (or API for short). It all starts with an API.

Hugo Bowne-Anderson of Datacamp explains what that means: An API is a set of protocols and routines for building and interacting with software applications.

Why are APIs important? An API is a bunch of code that allows two software programs to communicate with each other. You can connect to an API, pull data from it, and subsequently parse that data.

Using APIs has become the standard way to interact with applications ranging from Wikipedia to Twitter. PMA.core (and its little brother PMA.start) has an API, and our own product suite makes heavy use of it.

So while PMA.core isn’t our main product, it is definitely where everything starts, and today I see how many of our success stories can be attributed to the API that PMA.core offers.

Version 2

Currently we’re in the process of wrapping up version 2 of PMA.core. As it should be, we learned a lot over the last couple of years about how (not) to use our own interfaces.

Some good things:

Some not so good things:

  • We limited ourselves in terms of granularity. Here’s an example: PMA.core offers the possibility to store slide metadata in an audit-trailed, 21 CFR Part 11-compliant manner. However, the present interface only allows you to get data for one slide and one form at a time. So when you work on courseware projects like PathoTrainer, you need to put in additional progress bars, while the underlying data is retrieved in an atomic fashion. Of course, you’d much rather be able to retrieve the n metadata sets for m slides in a single call. In version 2 we will be able to do just that, and more.
  • We need much more documentation and more sample code. Pointing people to your WSDL manifest is NOT sufficient.

SDK

We’ve always had the idea to bring out a complete Software Development Kit (or SDK for short). The idea for an SDK is simple enough: take everything your API can do and build content around every call and every combination of parameters imaginable.

But what exactly do you put in it? What languages do you support? Looking around our own environment, we chose Microsoft.Net, Java and PHP as prime candidates. We know there are more of course, but you have to pick your battles.

We thought we were doing pretty well, having Java and Microsoft.Net desktop application sample code, until one partner told us they were using Delphi. What’s next? WinDev (https://fr.wikipedia.org/wiki/WinDev)?

Two problems then exist with the SDK the way we currently have it:

  • When we make a matrix with documented features and supported languages, we end up with a rather sparse matrix. This means that some things are documented in one language, but not in another. There are a couple of essential tasks that we have in any environment of course, like establishing a connection, but it’s inconsistent at best. We actually thought we could do this demand-driven, meaning that when someone asks us how to do task t in language l, we just fill in cell (t, l) of the aforementioned matrix. This works, to some extent; you can respond to the client, and be confident that the code you contribute is actually something application developers want. But the end result is messy and doesn’t look very professional.
  • The code examples we’ve been adding were mostly wrappers around API calls. In Microsoft.Net, we’d do a WebRequest call and interpret the returned JSON or XML stream. In PHP, we’d do a file_get_contents of whatever API call we needed to get the job done, and again interpret the results. This got the job done, but as a result much of the code that we delivered was more a tutorial in how to read web content and interpret the returned structured data. Ideally, however, these examples should focus more on what can actually be done with the software (instead of how to do something).

Wrapper libraries

For the current version 2, then, I want to be more up-front with our SDK offering. I want to be better prepared. I don’t think we should try to convince people anymore to “just” go and use our API. It’s too abstract, too inconsistent in places even (sometimes for historical reasons; who thought you could grow legacy so quickly?), and frankly probably too steep a learning curve for a great many people.

Pathomation offers a platform for the development of digital pathology software. I want to make sure people like to stand on our platform to begin with. It’s not necessarily about what I want to do with it… it’s about what others would want to do with it.

We already have code in Python that fetches images from PMA.core and does “something” with them. But there’s nothing fancy about such a sample Python script; you download a /region URL, then you process it like any other image. This is at the API level.

A wrapper library can now add more functionality and take care of repetitive basic tasks. What do developers not like to do? Write plumbing code. We can encapsulate all of that under the hood of a PyPI package or a Java namespace. We’re already doing that for the front-end handling and representation of whole slide imaging content with our JavaScript PMA.UI framework.

Here’s another idea: I can’t do a nuclear cell count on a (reduced resolution) overview image; I need at least 20x resolution for that. But I can do a cell count on an individual tile at high resolution, and then put a dot on the overview image to at least indicate where the cell was found. What I would want to do instead is create an overview image of e.g. 1000 x 2000 pixels, then loop through x * y tiles at a zoomlevel that in reality represents 5000 x 10000 pixels (but which is too bulky to process in one go via /region; we’re not VIPS); process the individual tiles, then scale and imprint the result from each tile back onto the 1000 x 2000 pixel overview image.
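
Purely as an illustration of that idea, here’s a minimal sketch using the core calls from pma_python shown earlier; the zoomlevel, the 32-pixel patch size and the process_tile() placeholder are all my own assumptions, so substitute your own analysis routine:

from pma_python import core
from PIL import Image

slide = "C:/my_slides/alk_stain.mrxs"
detail_zl = 6    # deeper zoomlevel at which tiles are actually analyzed (assumption)
patch = 32       # each analyzed tile ends up as a 32 x 32 patch on the overview

def process_tile(tile):
    # placeholder for the real work (e.g. counting nuclei and drawing dots);
    # for now it just returns the tile unchanged
    return tile

tiles_x, tiles_y = core.get_number_of_tiles(slide, zoomlevel = detail_zl)[0:2]
overview = Image.new("RGB", (tiles_x * patch, tiles_y * patch))

for y in range(0, tiles_y):
    for x in range(0, tiles_x):
        tile = core.get_tile(slide, x = x, y = y, zoomlevel = detail_zl)
        result = process_tile(tile)
        overview.paste(result.resize((patch, patch)), (x * patch, y * patch))

overview.show()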

I can imagine a class representing a slide that has indexer logic to retrieve tiles in real time (sort of like a programmer’s server-side version of PMA.UI), but that then destroys them again once they’re no longer needed (so memory usage doesn’t explode).
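
A minimal sketch of what such a class could look like, using the core.get_tile() call from pma_python shown earlier (the class name and the crude cache eviction are my own assumptions, not part of any existing library):

from pma_python import core

class TiledSlide:
    """Fetches tiles lazily via indexing and keeps only a bounded cache in memory."""

    def __init__(self, slide, zoomlevel, max_cached = 100):
        self.slide = slide
        self.zoomlevel = zoomlevel
        self.max_cached = max_cached
        self._cache = {}

    def __getitem__(self, xy):
        x, y = xy
        if xy not in self._cache:
            if len(self._cache) >= self.max_cached:
                self._cache.clear()    # crude eviction so memory usage doesn't explode
            self._cache[xy] = core.get_tile(self.slide, x = x, y = y, zoomlevel = self.zoomlevel)
        return self._cache[xy]

# usage: a tile is only retrieved (and cached) when you actually index into the object
wrapped = TiledSlide("C:/my_slides/alk_stain.mrxs", zoomlevel = 1)
wrapped[(0, 0)].show()

The demo script below sticks to the plain procedural calls that such a wrapper class would build on: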

from pathomation import pma

dir = pma.get_first_non_empty_directory()
slide = pma.get_slides(dir)[0]

max_zoomlevel = pma.get_max_zoomlevel(slide)
print ("Max. Zoomlevel: " + str(max_zoomlevel))
print ("Size in pixels: " + str(pma.get_pixel_dimensions(slide)))
print ("Resolution (PPM): " + str(pma.get_pixels_per_micrometer(slide)))
print ("Physical resolution (µm x µm): " + str(pma.get_physical_dimensions(slide)))
print ("Number of channels: " + str(pma.get_number_of_channels(slide)))
print ("Slide is fluorescent? " + str(pma.is_fluorescent(slide)))
print ("Number of tiles: " + str(pma.get_number_of_tiles(slide)))

selectedZl = 1 # do the following on zoomlevel 1 for demo purposes
tileSz = pma.get_number_of_tiles(slide, selectedZl) # zoomlevel 1
for tile in pma.get_tiles(slide, toX = tileSz[0], toY = tileSz[1], zoomlevel = selectedZl):
    tile.show()
    # do something with the tile

In closing

There’s a tremendous problem today with scientific software: published methods are described at a very high level and are not at all easy to replicate. And when they are detailed enough, or source code is made available, it’s not in an accessible language like Python (or Java for that matter; or it has sooooo many dependencies…). Too many research papers can be concluded with “now good luck finding the one former postdoc that actually knew how to do this…”

Remember “Developers developers developers”? Yes, make fun of Steve Ballmer all you want, but he got it right. You offer people basic services, and then get those people to develop software on top of your infrastructure.

So how do we intend on addressing this? By continuing to make our own software as versatile and kick-ass as possible (duh 😊), but also by going just one step further and reaching out to the legion of developers and researchers out there that currently still just have to make do with what they can get their hands on. We claim to have a better mousetrap for you, and we’ll prove it to you.

How to test and benchmark a tile server?

As mentioned before, I’m the Chief Technology Officer of Pathomation. Pathomation offers a platform of software components for digital pathology. We have a YouTube video that explains the whole thing.

You can try a local desktop-bound (some say “chained”) version of our software at http://free.pathomation.com. People tell us our performance is pretty good, which is always nice to hear. The problem is: can we objectively “prove” that we’re fast, too?

The core component of our suite is aptly called “PMA.core” (we’re developers, not creative namegivers, obviously). Conceptually, PMA.core is a slide tile server. Simply put, a tile server serves up data in regularised, square-shaped portions called tiles. In the case of PMA.core, tiles are extracted in real time on an as-needed basis from selected whole slide images.

So how do you then test tile extraction performance?

At present, I can see three different ways:

  1. On a systematic basis, going through all hypothetical tiles one by one, averaging the time it takes to render each one.
  2. On a random basis.
  3. Based on a historical trail of already heavily viewed images.

Each of these methods has its pros and cons, and which one you choose depends on what property of the tile server you want to test in the first place.

Systematic testing

The pseudo-code for this one is straightforward:

For x in (0..max_number_of_horizontal_tiles):
  For y in (0..max_number_of_vertical_tiles)
    Extract tile at position (x, y)

However, we’re talking about whole slide image files here, which have more than just horizontal and vertical dimensions. Images are organized as a hierarchical, pyramid-structured stack, and can also contain z-levels, fluorescent layers, or even timelapse data. So the complete loop for systematic testing goes more like this:

For t in (0..max_timeframes):
  For z in (0..max_z_stacks):
    For l in (0..max_zoomlevels):
      For c in (0..max_channels):
        For x in (0..max_number_of_horizontal_tiles):
          For y in (0..max_number_of_vertical_tiles):
            Extract tile at timeframe t, z-stack z, zoomlevel l, channel c, position (x, y)

But that’s just nested looping; nothing fancy about this, really. We’ve been using this method of testing for pretty much as long as we can remember, and even wrapped our own internal tool around it, (again very aptly) called the profiler.
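
Translated into Python against the pma_python calls shown earlier, a minimal version of such a systematic timing run could look like the sketch below (restricted to x, y, and zoomlevel for brevity; note that these timings include HTTP and image-decoding overhead, not just server-side rendering):

from pma_python import core
import time

slide = "C:/my_slides/alk_stain.mrxs"
timings = []

for lvl in core.get_zoomlevels_list(slide):
    tiles_x, tiles_y = core.get_number_of_tiles(slide, zoomlevel = lvl)[0:2]
    for x in range(0, tiles_x):
        for y in range(0, tiles_y):
            start = time.perf_counter()
            core.get_tile(slide, x = x, y = y, zoomlevel = lvl)
            timings.append(time.perf_counter() - start)

print("average tile extraction time: " + str(sum(timings) / len(timings)) + " s")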

What’s good about this systematic tile test extraction method?

  • Easy to understand
  • Complete coverage; gives an accurate impression of what effort is needed to reconstruct the entire slide
  • Comparison between file formats (as long as they have similar zoomlevels, z-stacks, channels etc.) allows for benchmarking

What’s bad about this extraction method?

  • It’s unrealistic. Users never navigate through a slide tile by tile.
  • Considering the ratio of data across the different dimensions that can occur in a slide, you end up over-sampling some dimensions while under-sampling others. Again, this results in a number that, while accurate, is purely hypothetical and doesn’t do a good job of illustrating the end-user’s experience.
  • In reality, end-users are only presented with a small percentage of the complete “universe” of tiles present in a slide. Ironically, the least interesting tiles will take the smallest amount of effort to send back (especially in terms of bandwidth, like “blank” tiles containing mostly whitespace on a slide or lumens within a specimen etc.)

Random testing

In random testing, we extract a pre-determined number of tiles (either a fixed number, or a percentage of the total number of available tiles). The pseudo-code is as follows:

Let n = predetermined number of (random) tiles that we want to extract
For i in (0.. n):
  Let t = random (0..max_timeframes)
  Let z = random (0..max_z_stacks)
  Let l = random (0..max_zoomlevels)
  Let c = random (0..max_channels)
  Let x = random (0..max_number_of_horizontal_tiles)
  Let y = random (0..max_number_of_vertical_tiles)
  Extract tile at timeframe t, z-stack z, zoomlevel l, channel c, position (x, y)

The same statistics can be reported back as with systematic testing, in addition to some coverage parameters (based on what percentage of the total number of tiles was retrieved).
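
The random variant only differs from the systematic sketch above in how the tile coordinates are picked; a compact version (again limited to x, y, and zoomlevel, with an arbitrary sample size of 500) might look like this:

from pma_python import core
import random, time

slide = "C:/my_slides/alk_stain.mrxs"
levels = core.get_zoomlevels_list(slide)
timings = []

for i in range(0, 500):
    lvl = random.choice(levels)
    tiles_x, tiles_y = core.get_number_of_tiles(slide, zoomlevel = lvl)[0:2]
    x = random.randint(0, tiles_x - 1)
    y = random.randint(0, tiles_y - 1)
    start = time.perf_counter()
    core.get_tile(slide, x = x, y = y, zoomlevel = lvl)
    timings.append(time.perf_counter() - start)

print("average tile extraction time: " + str(sum(timings) / len(timings)) + " s")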

Let’s look at some of the pros and cons of this one.

Here are the pros:

  • Faster than systematic sampling (see also the “one in ten rule” commonly used in statistics: https://en.wikipedia.org/wiki/One_in_ten_rule)
  • For deeper zoomlevels that have sufficient data, a more homogenous sampling can be performed (whereas systematic sampling can oversample the deeper zoomlevels, as each deeper zoomlevel contains 4 times more tiles).
  • Certain features in the underlying file format (such as storing neighboring tiles close together) that may unjustly boost the results in systematic sampling are less likely to affect results here.

What about the cons?

  • Smaller coverage may require bootstrapping to get satisfying aggregate results.
  • Random sampling is still unrealistic. Neighboring tiles have less chance of being selected in sequence, while in reality, of course, any field of view presented to an end-user is composed of neighboring tiles.
  • Less reliable to compare one file format to the next, as this may again require bootstrapping.

Historic re-sampling

A third method can be devised based on historic trace information for one particular file. A file that’s included in a teaching collection and that’s been online for a while has been viewed by hundreds or even thousands of users. We found in some of our longer-running projects (like http://histology.vub.ac.be or http://pathology.vub.ac.be) that students under such conditions typically are presented with the same tiles over and over again. This means that for a given slide that is explored fairly often, we can reconstruct the order in which the tiles for that particular slide are being served to the end-user, and that trace can be replicated in a testing scenario.

In terms of replication, this is then the most accurate way of testing. Apart from that, other advantages exist:

  • This is the best way to measure performance differences across different types of storage media. If for some reason a particular storage medium introduces a performance penalty because of its properties, this is the only reliable way to determine whether that penalty actually matters for whole slide image viewing.
  • For large enough numbers (entries in the historical tracing logs), a “natural” mixture of different tiles in different zoomlevels, channels, and z-stacks will be present. This sequence of tiles presented in the trace history automatically reflects how real users navigate a slide.

However, this method, too, has its flaws:

  • This type of testing and measuring cannot be used until a slide has actually been online for a certain time period and browsed by a large number of end-users.
  • Test results may be affected by the type of user that navigates the slide: we shouldn’t compare historical information about a slide browsed by seasoned pathologists with how novice med school students navigate a different slide. Apples and oranges, you know.
  • Because each slide has its own trace, it becomes really hard to compare performance between different file formats.
  • Setting up this type of test requires, of course, historical trace information. This means that this test is the most time-consuming to set up: IIS logfiles have to be parsed, tile requests have to be singled out and matched to the right whole slide image, etc.
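
For that last step, a minimal sketch of the log-parsing could look like the snippet below; the tile URL pattern and the log file name are purely hypothetical assumptions and will differ per deployment:

import re

# hypothetical pattern: a /tile request with x, y and zoomlevel query parameters
tile_pattern = re.compile(r"/tile\?.*x=(\d+).*y=(\d+).*zoomlevel=(\d+)", re.IGNORECASE)

trace = []
with open("u_ex180101.log") as logfile:    # hypothetical IIS log file name
    for line in logfile:
        match = tile_pattern.search(line)
        if match:
            x, y, lvl = (int(g) for g in match.groups())
            trace.append((lvl, x, y))

# 'trace' now holds the tile requests in the order real users triggered them,
# ready to be replayed against the tile server
print(str(len(trace)) + " tile requests recovered from the log")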

Preliminary conclusions

This section came out of discussing the various strategies with Angelos Pappas, one of our software engineers.

The current profiler that we use was built to do the following:

  1. Compare the performance impact of code modifications in PMA.core, for example by changing around a parser class, or by modifying the flow in the core rendering system. We needed a way to compare the relative difference between versions.
  2. Compare the performance when rendering different slide formats. To do this, you need similar slides (dimensions, encoding method and of course pixel contents), stored in different formats. The “CMU-{N}” slides from OpenSlide are a good case, as well as the ones we bring back ourselves from various digital pathology events. This, again, allows us to do relative comparisons that give us hints about why one format is slower than another. Is it our parser that needs improvement? Is it the nature of the format?
  3. Compare the performance of different storage sources, like local storage versus SMB.

The profiler does all of the above nicely and it’s the only way we have to do such measurements. And even though the profiler supports a “random” mode, we hardly ever use it. Pathomation test engineers usually let the profiler run up to a specific percentage or for a specific period and compare the results.

Eventually, what you want to accomplish with all this is to get objective measurements of the user experience. The profiler wasn’t really meant to measure how good the user experience will be. This is a much more complicated matter, as it involves patterns that are very hard to emulate, network issues, etc. For example, if a user zooms into a region, the browser fires simultaneous requests for neighboring tiles. If you ever want to do this kind of measurement, perhaps your best bet would be to do it by commanding a browser. Again though, your measurements would give you a relative comparison.

Backup and reboot

A new year starts with good intentions. For me, the new year 2018 in part started with the realization that it’s been forever since I’ve written anything on the RealData blog.

I like writing. I like passing on knowledge. So the most pertinent question to ask then perhaps is: why did I stop?

Let’s start with the obvious one: lack of time. Getting content out in the right format is hard. It’s one thing to jot down some notes in a diary (a Jupyter notebook perhaps); it’s quite another to deliver a publishable, readable blog entry.

And after all: what’s the point, right? Who reads this anyway? You? Why? Do I even have anything to tell you?

I think my journey is still worth sharing. So let’s look at some of the things that went wrong and contributed to my preliminary failure:

I slacked off on the online courses I was planning on taking back in April 2017. Perhaps even worse: I wasn’t passing anymore. At least not as easily as I did in the beginning. Why was that?

I’m now convinced that taking online courses is an art in itself. It’s really easy to go to edx.org, or Coursera, or Datacamp, or any other platform and have the intention of “Let’s take every data science course there is and become a data scientist”. They all look so interesting, right?

I remember passing the Datacamp course Introduction to Python for data science (aka “Python for beginners”) in two days, with a 99% final score.

Woot! I know Python (or so I thought)

I started struggling really badly taking the advanced Programming with Python for data science course. Again: why? Because anyone who’s been programming for over 25 years can probably pass any beginner programming course in any language. I was forgetting that I’m pretty proficient in C# today because I have been using it ever since the very first European .Net conference in Copenhagen, Denmark, in 2001.

In order to become a proficient Python programmer, I have to start using the language myself in daily tasks. An obvious one, I’m afraid, but saying it is easier than doing it. There’s a level of humility attached to this: it’s hard for me to spend half an hour trying to figure out how to process something as trivial as a text file in Python, when I *know* I can write the same script in PHP in 5 minutes. Hey, I can probably do it with a console application in C#, too!

A few years ago, I was mentoring someone in software development. The person was highly educated, had some scripting experience with Excel VBA, and seemed ready for the next level. At one point I was looking something up for her on StackOverflow when she commented “oh, so programming is just copy/paste, right?”. Well, yes and no. Mostly no, of course. But websites like StackOverflow can make it look that way. And sometimes we fool ourselves into thinking that it really has become that easy. It hasn’t.

At around the same time that I was flunking my advanced Python curriculum, software testing was rapidly becoming a top priority at Pathomation. Huzzah, edx.org also appeared to have a “micromasters” track in software testing. This track consisted of only 3 courses, which seemed more manageable than the entire data science track.

I took the first course and passed. I should point out that these are academically organized courses, considered to be graduate level, and passing them is actually somewhat tough: you have to get an 80% in order to get the certificate.

Unfortunately, the testing curriculum went the same way as the data science curriculum I had signed up for. I passed the first course, but slacked off halfway through the second course. There’s only so much coursework your brain can process in any given time period, too.

Half full or half empty? I passed the first course, but never got around to taking the other two.

They say that good intentions have a higher success rate of being realized when you write them down. So, here’s my intention for 2018: I’ve backed up a little and given some consideration to why I failed in my attempts last year.

So let’s take a reboot now and set out to complete the following courses and education tracks in 2018:

Curious to see if I’ll make it this time? Keep following this blog, then!

 

A first project

The edX track that I’m currently following is sponsored by Microsoft. No particular reason. There are many distance learning options out there, and the combination of MIT, Harvard, and Microsoft seemed like reasonable credentials.

But courses are only as good as the extent to which you can apply them. So let’s see if I actually learned something useful so far.

The start-up which I’m involved in, Pathomation, has developed software for digital microscopy (and by extension also pathology and histology). This software is used at the Free Brussels University (VUB).

As part of my job as the VUB’s digital pathology manager, we built an interactive web portal for their diabetes biobank.

Long story short: now that we have (some of the) data online, we want to go further and integrate with the back-end.

The premise is simple: for workflow and sample tracking, the laboratory uses SLims (a biobank information management system) by Genohm. It is on this end that Pathomation now also wants to offer slide visualization services.

In order to do that, Genohm has asked for a test set. The test set should contain sample information, in addition to which slides are associated with each sample. The sample information is contained within a Microsoft SQLServer database, and the slides are hosted within Pathomation’s PMA.core.

A few weeks ago, I would have messed around with Microsoft Excel (or Open Office Calc), but I recently learned about the really cool “data merge” capabilities in Azure ML Studio.

So the plan consists of three steps:

Here’s the SQLServer query:

Which, after exporting to text, still needs a bit of tweaking (SQLServer apparently didn’t include the column names):

And here’s the PHP script that retrieves the hosted slides from PMA.core:

Next, I uploaded both outputs as new datasets into Azure ML Studio (make sure that you use the “,” (comma) as separator character; Azure ML only operates on CSV files that are (true to their name) COMMA separated):

And then I created an experiment that joins both datasets:

Based on a common Sample identifier:

The final step is to extract the new combined dataset:

Note that the final field is now a delimited list in its own right (but those were the customer requirements: a single CSV file with one row representing one sample, and a single field in which all (many) slide references were contained).

And there you have it. Admittedly this is not the fanciest data science project ever done, but it still shows how you can very elegantly solve everyday problems. Problems that only a few weeks ago would have been much more complicated for me to solve (What scripting language to use? Use a database or spreadsheet? Etc.)
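
Outside of Azure ML Studio, the same join can be sketched in a few lines of pandas; the file and column names below are hypothetical stand-ins for the two exported datasets:

import pandas as pd

# hypothetical exports: sample information from SQLServer, slide list from PMA.core
samples = pd.read_csv("samples.csv")    # contains a "SampleID" column
slides = pd.read_csv("slides.csv")      # contains "SampleID" and "Slide" columns

# collapse the (many) slides per sample into a single delimited field
slides_per_sample = slides.groupby("SampleID")["Slide"].apply(";".join).reset_index()

# join on the common sample identifier and export the combined dataset
combined = pd.merge(samples, slides_per_sample, on = "SampleID", how = "left")
combined.to_csv("combined.csv", index = False)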

By the way, it is not my intent to tout Microsoft technology in this blog. I could have looked for another track. I could have gone for Datacamp as an alternative. But my problem with Datacamp is that it seems to have its starting point somewhat wrong: it’s very programming-language focused. You either decide you want to do data science with R, or with Python, and that’s it. And once you’re in that track, you’re stuck with the libraries they’re offering you on that platform in that language. Rather myopic, if you ask me. Plus, I’ve never liked R, and don’t know much about Python besides the few scripts that I had the developers at Pathomation write for me. I know one thing: throughout my career I’ve written code in any number of programming languages, and I find it best to pick the language based on the problem you’re trying to solve, not the other way around. But I digress…

Regardless, I’m really excited about Azure ML Studio. It’s not only a way to do machine learning, but also appears to be a versatile toolkit for data handling.

So, do you agree with my approach? Or did I overcomplicate things? Would you have done this differently? Leave a comment below.

 

Welcome to a new blog

How do you start a blog? By introducing yourself. So: my name is Yves Sucaet, and I’m the Chief Technology Officer of Pathomation. As the top tech guy, one enjoys certain liberties. One of those is using corporate resources to host my own blog.

Why a blog? Why now? I’ve certainly thought about it many times before. Now seems like a particularly good time, however, as I find myself about to embark on a new journey: data science.

Why data science? Because it is hot? Because it is cool? Because it is hype? All of the above, probably. Yes, data science is undoubtedly a hot field, and as a software tech company’s CTO, it’s only my duty to go where the next big opportunities lie. Is data science cool? I’m not the one to say. Depends on your definition of cool, I guess. Is it a hype? Most definitely. But that need not necessarily be a bad thing. It merely means that there’s somewhat more fog than usual to cut through before you get to the core of a topic and understand it.

I like to think that I come into this unbiased. Also keep in mind that I cannot absorb all knowledge about data science at once; what you’ll read here will eventually be somewhat biased, as it will be influenced by the tools I pick up along the way (more about that in a subsequent post).

I had tried picking up the subject before, but honestly failed miserably. Because of the hype. Because of the clutter. Because of the learning curve of some of the tools out there. Because it’s not fun trying to wrestle yourself through a 300-or-so page book, only to find a list of 3000 other references you should REALLY read to get more background.

What’s changed? I think I’ve finally come across a platform that can help me on my way. I’ve typically been skeptical about online learning (University of Phoenix, Trump University…), but edx.org seemed to be one worth checking out. It’s backed by some major names in education, including Harvard and MIT. Combine that with a specific “track” (and not just random incoherent tutorials about the next great framework) to get you on your way, and I decided to give it a try.

One of the first things discussed in the course is reaching out. To talk about what you do and how you do it. So this blog is my attempt at contributing to the community, and at taking you with me on my path so that perhaps we can learn together.

So what else is there to know besides the fact that I’m looking into data science to expand our company’s reach and renew my own personal skillset? I originally trained as a bioinformatician. I completed my graduate work at Iowa State University in 2010. As part of that curriculum, I took a machine learning course in 2007. That was fun, but hard, and tedious. I went through the statistics. I played with the Weka toolkit. As a project for the course, I remember working on k-mer pattern clustering in amino acid sequence strings. The algorithm worked, but one had to do such an extraordinary amount of work that I didn’t find the subject interesting enough to continue with. I understand that the people who did continue in this field eventually put their heads together and built the easily configurable back-ends that we have today. Throw in some HCI people and you get user-friendly interfaces to control those back-ends, too.

Machine learning anno 2007.

Machine learning anno 2017.

Here’s the bottom line: I don’t come into this blindly. I don’t come into this naively either, I believe. I’ve certainly got baggage. Good baggage, I think. And I’m able and willing to travel. And I’m looking forward to meeting new people on my journeys. So leave a comment if you feel like it.