Video Script: Cell Data Bank

From NoskeWiki


Title: ....... Cell Data Bank
Length: ..... 11 min 0 secs
Credits: .... Andrew Noske, Stephen Larson

This narrated video was made at the request of Mark Ellisman. It can only be shown on request.

Executive Summary

This page represents the script I wrote for two movies which demonstrate an online multi-scale visualization tool for cells and organisms, married to a wiki environment for quantitative biological information. In this interface people could dock 3D images/models of cells into the (approximately) correct location within an organ via a webGL interface, and below that they could enter published quantitative information about each cell and organ type into a modified MediaWiki.

This movie is also closely related to the Whole Brain Catalog project at the National Center for Microscopy and Imaging Research at the University of California, San Diego.

Script #1: The Proposed Interface for Cell Data Bank

1) Proposed Interface

The interface we propose for Cell Data Bank has two main components: a webGL interface up top in the style of the Google Body Browser, and below this, text content in the style of a MediaWiki. Ideally we could also resize or hide one or the other, so that we could make either full screen.

On the first page you'll notice we can pick between a few different model organisms, which would be slowly added over time. Let's say we're interested in the mouse. (0:30 / 0:30)

2) At the Organism Level

On clicking an organism, it begins to load the model. Notice also that the page below has changed and gives us information about this organism. Up top is basic information, telling us the average size and weight of a mouse. Below that we have dataset information, which tells us about this specific model and how it was acquired.

Building on the Google/Zygote Body browser, now an open source project called open 3D viewer, the slider on the left provides an easy way to fade out the different systems in the mouse so we can view its anatomy. If I'm interested in a particular compartment, I can click it, and I see these important options. The first is an option to load a more detailed model of the brain. Below it, "load in place" will also load the more detailed model, but do so in place, and "calculate size" would tell us the size of this particular brain mesh. Let's load the brain in a new coordinate system. (1:00 / 1:30)

3) At the Organ Level

Here's the brain, and we see the slider now reflects the different parts of the brain which we may want to show and hide. In actual fact there are hundreds of parts to a brain, so if we want more fine-grained control we can click here to shift to a list view.

In this view we can individually select, show and hide individual compartments or groups of compartments. Below this, you'll notice a list of cells. This represents a list of all the cells you'd expect to see in the brain; the ones at the top in black are linked to 3D models. Notice the little purple dots represent our cells, but since we can never pinpoint these precisely, the yellow sphere represents an area of deviation, and also helps us see where the cells are. Again, we can select any compartment we choose, if we wanted to add our own cell to that compartment.

Positioning cells is very difficult in 3D, however, which is why we reuse a familiar model from Google Maps, where we can change between different views. Currently we are on Model, but let's switch to Slices. Notice we can view the top, side or front view - words which make a lot more sense to the average person than transverse, sagittal and coronal. In the slice view we can see slices of the raw image data on which each model is based... and here we can help identify coordinates and the approximate location where a cell comes from. If we wanted to scroll up and down slices we could use the up/down arrows or this slider here. Meanwhile, the Hybrid view will show us our last visited slice in the context of the model, and here we see an orthographic (parallel) view, which allows us to better measure distances between any two arbitrary points - something which is very tricky in the normal perspective view. (1:00 / 2:30)
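The reason measurement works so nicely in an orthographic view is that screen distance maps linearly to world distance in the view plane, independent of depth. A minimal sketch of that idea in Python - the function name and the `world_units_per_pixel` parameter are illustrative only, not part of any actual viewer:

```python
import math

def measure_ortho(p1_px, p2_px, world_units_per_pixel):
    """Distance between two clicked screen points under an orthographic
    (parallel) camera: pixel offsets scale linearly to world units,
    regardless of how deep each point sits. In a perspective view the
    scale changes with depth, which is why measuring there is tricky."""
    dx = (p2_px[0] - p1_px[0]) * world_units_per_pixel
    dy = (p2_px[1] - p1_px[1]) * world_units_per_pixel
    return math.hypot(dx, dy)
```

With a scale bar of, say, 0.5 world units per pixel, two clicks 50 pixels apart measure 25 world units.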

Let's actually look at a cell now. Let's select then zoom in on the optic chiasm, and there happen to be two cells here. Currently the cells are represented as oversized spheres, but if we click, then we can load one. Notice here we can see its unique model ID and also its rotation relative to the brain. Keep in mind that the rotation may be roughly known for some cells, but in many cases its rotation will be completely unknown, so that's something else we want to reflect in metadata. (2:30 / 4:00)
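The metadata described above - a position, a margin of error, and a possibly-unknown rotation - could be captured in a record like the following. This is a hypothetical sketch of the data shape only; the field names are my own, not from any implemented schema:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DockedCell:
    model_id: str                             # unique model ID shown on click
    position: Tuple[float, float, float]      # estimated location in the organ's coordinates
    deviation_radius: float                   # radius of the yellow "area of deviation" sphere
    rotation: Optional[Tuple[float, float, float, float]] = None
    # rotation as a quaternion relative to the organ; None means completely unknown

# A cell whose position is roughly known but whose rotation is not:
cell = DockedCell("cell_0042", (1.2, 0.4, 3.1), deviation_radius=0.5)
```

Making the rotation optional (rather than defaulting to identity) keeps "unknown" distinguishable from "unrotated", which is exactly the distinction the narration asks the metadata to reflect.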

4) At the Cell Level

Here we load a cell at its native rotation and we can see the compartments which have been segmented, with our purple plasma membrane defaulting to semi-transparent. In this list the little blue mesh icon represents a mesh, while the expand arrow means a group of meshes, so here we can expand mitochondria and actually see that there's a bunch of mitochondria, each with a unique number.

Something I want you to notice here is that the cell, organ and organism levels are all very similar; just the scale is different. Just like before, however, we have an option here to look at the raw data.

Let's flip back to the easier slider view. Different compartments will be automatically matched to different layers so that they can fade in order, and now we can peel back the plasma membrane, mitochondria and so on.

This interface can thus provide a very nice way for researchers to show off their cell models... but don't forget we also want to make this program more useful than just a visualization tool. As such, we can actually select and right click our group of mitochondria to graph the lengths, or calculate the size. Let's click calculate size, and here we can see the number of mitochondria, plus their total size, surface area and so on.

If we wanted, we could also click an individual mitochondrion to work out its size.
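For a closed triangle mesh, the numbers behind "calculate size" can be derived directly from the triangles: volume as a sum of signed tetrahedra against the origin (the divergence theorem), and surface area from cross-product magnitudes. A sketch of the underlying maths, not the viewer's actual code:

```python
def mesh_volume_and_area(vertices, triangles):
    """Volume and surface area of a closed triangle mesh.
    vertices: list of (x, y, z); triangles: list of (i, j, k) indices
    with consistent winding."""
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    vol = area = 0.0
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        vol += dot(a, cross(b, c)) / 6.0          # signed tetrahedron volume
        n = cross(sub(b, a), sub(c, a))           # face normal (unnormalized)
        area += 0.5 * dot(n, n) ** 0.5            # triangle area
    return abs(vol), area
```

Summing the per-object results over a selected group gives exactly the kind of totals described: the count, combined volume and combined surface area of all mitochondria.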

This information is starting to get more useful, but what makes this interface really useful is that below this, people have been allowed to enter information about an average mouse dentate gyrus neuron. This page here represents a wild-type mouse, and the way it works is that all these fields start as red and unknown, but people who visit can fill in the blanks, and provide at least one citation for each piece of information. As someone who studies pancreatic beta cells, I may have no idea about the size of synaptic vesicles in this dentate gyrus neuron, but there are scientists out there who actually study and specialize in the size of these vesicles, and so over time we can help populate this information and see where our gaps in knowledge still lie.

Here we can see that someone has entered an average volume for mitochondria, and if we want we could chase up the literature and see how stereology was used to calculate this. In the case of brain cells, it's actually very difficult to calculate total volumes because they have such long axons and large complexes of dendrites. Indeed if we read this dataset info, or even look closely at this model, we'll see that although the main body or soma of this neuron is captured, we'll often be missing a lot off to the sides. Ideally a cell is captured in its entirety, so that it can be fully analyzed, but in the case of neurons it's really not very possible. (3:00 / 7:00)

5) Additional Features

Let's zoom out now, and you'll notice that if I zoom out far enough, it presents a link to go back to the next level up. Also, I could actually click any of the breadcrumb links below if I wanted to jump back to the mouse or back to the home screen. Let's return to our mouse now, and then let's type in the word "pancreas". The open 3D viewer has this very neat interface where you can jump to any object and even make use of synonyms - useful for objects like a cell's plasma membrane, which may also be known as a cell membrane.

Just to be different, let's click "Load in place" so that we can view the more detailed pancreas model while still seeing the mouse. There aren't many parts to a pancreas, but we do have a couple of cells here, and also a little bit of a color scheme whereby these purple cells represent wildtype... but the red ones may represent diseased cells.

Let's load this pancreatic beta cell. This is a wildtype pancreatic beta cell which is captured in its entirety at high resolution... and once again, we can peel away layers, and we can see what type of compartments have been segmented.

Notice here I've shown lots of unknowns, but in actual fact there are hundreds of studies and hundreds of experts who will know these pieces of information. People study all types of different cells - not just mouse cells, but cells from organisms like E. coli, zebrafish, monkey, drosophila and plants.

Here I wanted to demonstrate something a bit fancy, and that's the idea of comparing models. The crudest way to compare models would be to open two separate browser windows, but here I wanted to talk about the possibility of a split webGL window. Here I grab the separator on the right, and drag it across, and I can see two views. Now let me zoom out on the left, back to the pancreas, and open a different cell. Now suddenly I can view these side by side. Currently whatever I do on one side is independent of the other, but what I really want to do is view these two cells at the same scale, so I click this lock scale icon, and it wants to switch me to an orthographic view, which is fine. Now when I zoom in and out on one, the other stays at the same scale. We also have an icon here to lock actions, so with this clicked, actions like changing rotation or opacity are also locked between the two cells.
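The scale-locking behaviour just described can be sketched as two views sharing one zoom level: on locking, both snap to a common scale, and any zoom on one side propagates to the other. A toy illustration (class and method names are hypothetical, not from the real viewer):

```python
class OrthoView:
    """One pane of a split orthographic webGL window."""

    def __init__(self, scale=1.0):
        self.scale = scale   # world units per screen pixel
        self.peers = []      # views whose scale is locked to this one

    def lock_scale(self, other):
        """Lock two views together, snapping the other to our scale."""
        self.peers.append(other)
        other.peers.append(self)
        other.scale = self.scale

    def zoom(self, factor):
        """Zoom this view; locked peers follow so cells stay comparable."""
        self.scale *= factor
        for peer in self.peers:
            peer.scale = self.scale
```

Locking requires an orthographic projection because only there does a single "world units per pixel" number fully describe the view's scale; in a perspective view the apparent size also depends on depth.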

Here I've demonstrated the comparing of two pancreatic cells, but one of the neat features of this is that it makes it easy to compare the size and structure of different cell types, or perhaps even the difference in anatomy between two different animals, or the same cell from two different animals.

Some of the final buttons we can see in this interface include this one, to flip the left and right views.

This button can show a little information about the model itself. This button brings up some basic help on which mouse button does what. This button is for settings.

Also it would be nice to have a button to bookmark or show someone exactly what you're looking at.

Other useful buttons are this one to help us either upload or download a model. Downloading a model is self-explanatory enough, and the model would come down in OBJ or VRML format. To add an object, it should be required that it is first uploaded to a database, including both the model and raw image data, so that we know the cell is real. Also you'd want to make sure all the other metadata like specimen type, condition, age and so on is recorded prior to finding where to dock it into this framework. (4:00 / 11:00)

6) Conclusion

The interface we've shown here represents a way we could combine 3D anatomy models, such as those from the Whole Brain Catalog, with a crowd-sourced science wiki such as Neurolex. By combining the two, we get the best of both worlds.

The webGL interface itself will be based on the open 3D viewer, allowing easy visualization and peeling back of layers... however, we hope it will be much more than that, because it also allows you to view raw image slices, dock cells, and query the size of compartments.

Meanwhile the wiki interface will help fill a void in cellular biology, whereby it can take a biologist many hours of browsing literature to find quantitative information which another biologist probably already knows, such as the average size or number of sub-cellular compartments in a particular cell type - something which serves as a useful reference, even in cases where no-one has yet acquired a 3D image.

As a project, this is very ambitious and not one we can currently support. If you're interested in helping in any way please contact us immediately.

To help inspire you, PhD student Jeff Bush has provided an example of displaying one of my cells in the viewer.


7) Acknowledgement

Before I go, I should thank my colleagues at the National Center for Microscopy and Imaging Research at the University of California, San Diego. This video has been a demonstration of how we might benefit by combining two of our existing systems into one interface. The first of these systems, the Whole Brain Catalog, was developed by Stephen Larson and his team below - and represents a way to navigate the brain with more sophisticated options to show and hide layers, as well as dock cells... similar to the method you saw in this video. The other system, Neurolex, was developed by Maryann Martone, Stephen and several other developers from the Neuroscience Information Framework, and represents a wiki where anyone can build onto existing ontologies by entering their own references and ontology terms inside a modified MediaWiki framework, similar to the wiki simulated in this video.

Finally, thanks to Mark Ellisman who runs the National Center for Microscopy and Imaging Research, and to Jeff Bush for demonstrating the use of open 3D viewer on cells.


NOTE: If time permits I may try to develop a second video explaining the motivation for such a system, including the importance of making 3D cell models public, the importance of spatially docking models, and the need for a coordinate system for organelles.

Script #2: The Motivation Behind Cell Data Bank

1) Introduction

Hi, I'm Andrew Noske, here to discuss the idea of Cell Data Bank - a webGL website for cataloging anatomy models and cells.

Later in the video I'll demonstrate how such a site could work and help us query quantitative information. But first I should explain the motivation. (0:20 / 0:20)

2) Motivation

In the field of molecular biology, when a 3D protein structure is solved and published, most journals require the authors to upload the new structure to a centralized site like the Protein Data Bank. Here people can browse through all solved structures and can download or view each 3D model as a PDB file for analysis.

In the field of cellular biology, when 3D models or images of cells are published, there is rarely any requirement to upload the data anywhere. There are a few websites, such as the Cell Centered Database and Cell Image Library, which allow the upload of cellular images and models from light and electron microscopy... but it's always a battle to convince biologists to publish their models, and so the only part of their data available to other scientists is a few pixelated figures in the publication itself. The original data rarely enters the public domain and eventually disappears.

Our motivation is to help create a system that encourages these biologists to share their data and knowledge of cells in a centralized location by providing some unique features, like cell docking, in such a way that helps us realize one of the main goals of cell biology: the characterization of cells. (1:10 / 1:30)

3) The Goals of Cellular Biology: Characterizing Cells

The biggest overarching goal of cell biology is to learn how cells work. Since shape and function are related, we try and characterize the shape and function of cells in different states - helping us compare the difference between normal and diseased cells.

We've had some success doing this, but to properly characterize the 3D structure of a cell is incredibly difficult. Cells are incredibly dynamic, and not only do they change naturally over time, but they can also be drastically affected by factors such as:

  • location of the cell [*]
  • age of the cell
  • species of organism it belongs to.
  • age, size and sex (of the organism)
  • overall condition of the organism (health, diet, stress, sun deficiency, etc)
  • presence of disease(s)
  • time of day at death and/or imaging
  • specimen preparation process
  • and more

As you can see, that's a lot of factors already - each of which can drastically alter the behavior, shape and organelle makeup of our cell. Whenever we publish or upload images of a cell it's very important we record all this metadata. Most cell biologists can do a decent job of describing specimen preparation, but do a poor job with everything else. In particular, we neglect this very important point at the top. Location matters. (1:10 / 2:40)

4) Location Matters

Each cell type can vary enormously with its position. Just as one example, a retinal ganglion cell here, near the optic nerve, is known to have a different morphology and characteristics to one over here... hence it's very important to record, as precisely as possible, where each cell comes from and dock it using an appropriate coordinate system for the organ. What we want is to encourage or enforce the registration of this position, and as part of that we also want to help develop a referencing system for all the organs which don't yet have a formal position system - which is almost all of them! We also need to make sure this referencing system for organism parts scales relatively well with the size and development of the animal. (0:50 / 3:30)
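One simple way a referencing system can scale with the animal, as suggested above, is to express positions as fractions of an organ's bounding box rather than in absolute units. This is only an illustrative sketch of the idea (real organs would need something better than an axis-aligned box), with hypothetical function names:

```python
def to_organ_fraction(point, bbox_min, bbox_max):
    """Express a point as fractions (0..1) of the organ's bounding box,
    so the same coordinates stay meaningful as the organ grows."""
    return tuple((p - lo) / (hi - lo)
                 for p, lo, hi in zip(point, bbox_min, bbox_max))

def from_organ_fraction(frac, bbox_min, bbox_max):
    """Map fractional coordinates back into a (possibly larger or
    smaller) organ's own coordinate system."""
    return tuple(lo + f * (hi - lo)
                 for f, lo, hi in zip(frac, bbox_min, bbox_max))
```

A cell docked at the center of a young mouse's brain would then land at the center of an adult brain too, despite the difference in absolute size.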

5) Proposed Docking Environment and Spatial System

What we propose here is a way to encourage biologists to publish their data in a 3D environment where the scaled images are docked into the relevant 3D anatomy model and given a realistic margin of error, since it's difficult to place them precisely. In addition to the usefulness of docking regions of cells, this environment could be extended to let us explore all the way from the level of organism, to organ, tissue, cell and organelle... in some cases we could even dock a few molecules, but let's not get ahead of ourselves.

  • Is such a visualization environment cool? Yes.
  • Is it useful to biologists?..... No... not just yet.

Visualization environments can sometimes help as educational tools - such as teaching people about scale - but only when it's easy to extract new insight and quantitative information do they become useful to biologists. To help encourage use by biologists, the system should also include features to measure and compare the various compartments.

To that end, it should be possible not just to measure discrete distances, but also to click any of these compartments and output volume and surface area. Taking this even further, it would perhaps be possible to generate histograms of sizes, or in this case the lengths of all mitochondria in this cell. Already this is becoming a bit more useful for biologists wishing to quantify their cells, and if they want to do further analysis they can always download the model into a more specialized desktop program.
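The histogram idea above amounts to bucketing per-compartment measurements into fixed-width bins. A minimal sketch (function name and bin scheme are my own, for illustration):

```python
def length_histogram(lengths, bin_width):
    """Bucket measurements (e.g. mitochondrion lengths) into fixed-width
    bins; returns {bin_start: count}, ready to plot as a histogram."""
    hist = {}
    for length in lengths:
        bin_start = int(length // bin_width) * bin_width
        hist[bin_start] = hist.get(bin_start, 0) + 1
    return hist
```

For example, mitochondria of lengths 0.2, 0.4, 0.9 and 1.1 microns with 0.5-micron bins would yield two in the first bin and one in each of the next two.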

But now the really tough questions. How do we know this cell is representative? What is the real average size of mitochondria in this cell type?

At this level of high resolution there are very few detailed models of cells - 3D whole cell tomography has really only become possible in the last few years - but through older, high-throughput averaging techniques like stereology, there is a great deal we do know about cells. So we propose that this system combines the detailed 3D structure we can see in these models with what we already know about cells, in such a way that scientists can fill in the gaps - letting us know the average size and makeup of these different compartments in each cell type. The principle here is very similar to Wikipedia, where people collectively build a detailed picture.

Unlike Wikipedia, however, Cell Data Bank should have the very focused goal of answering questions about the cellular composition of different cells. If a biologist ever has a question about the average size of a nucleus in a particular cell type, he can quickly obtain an answer and a link to the relevant literature.

And so we propose a system with two components: a webGL interface at the top for 3D visualization, and a Wikipedia-like interface down the bottom.

While webGL doesn't yet work on all browsers, the advantage of webGL is its speed at rendering many polygons, allowing people to see 3D visualizations without needing to commit to downloading an application. (3:10 / 6:40)

6) Components of the Proposed System

The proposed interface you're about to see takes inspiration from several different existing systems. The first of these is the Google Body Browser, a system which uses webGL and is now open source. This system was chosen precisely because it's so easy to use, with the widget on the left making it easy to peel away layers of the body for effective visualization... and the search bar at the top right used to instantly search and jump to any body part.

To improve on this, however, we also want to be able to show image data and allow measurement. And so you'll also notice elements such as the scale bar, and the familiar system here to switch between a vector representation and a satellite-image-style representation.

Of course, instead of viewing streets, we want to peer deep into cells and organs such as the brain. And so we want to take from two systems developed by the National Center for Microscopy and Imaging Research (NCMIR) at the University of California, San Diego.

The first of these systems is the Whole Brain Catalog, developed by Stephen Larson and his team at NCMIR. The Whole Brain Catalog allows more sophisticated controls for hiding layers and showing cells, and also allows the docking of cells - thus representing a more advanced model for the webGL interface.

Finally, we have taken much inspiration from, and even wish to link into, Neurolex - a MediaWiki system which has been used to help characterize different organelles and components within the brain. In our case, however, we wish to expand this to all cell types, and in particular allow users to enter quantitative information regarding what is known about each cell. (1:50 / 8:30)


  • IMOD - naming objects - a page I wrote which talks about the importance of using ontologies and naming objects correctly.