```python
import os
from caveclient import CAVEclient
from nglui import parser

client = CAVEclient('minnie65_public')

state_id = 5560000195854336
state_json = client.state.get_state_json(state_id)
state = parser.StateParser(state_json)
```
NGLui: Neuroglancer States
This tutorial has recently been updated to materialization version 1412 (from 1300).
We have released a new public version 1412, as part of our quarterly release schedule. See details at Release Manifests: 1412.
Programmatic Interaction with Neuroglancer States
Visualizing data in Neuroglancer is one of the easiest ways to explore it in its full context. The python package `nglui` was made to make it easy to generate Neuroglancer states from data, particularly pandas dataframes, in a programmatic manner. The package can be installed with `pip install nglui`.

The `nglui` package interacts prominently with `caveclient` and annotations queried from the database. See the section on querying the database to learn more.
Parsing Neuroglancer states
The `nglui.parser` module offers a number of tools to extract information about Neuroglancer states from the JSON format that Neuroglancer uses. The recommended approach is to pass a dictionary representation of the JSON object to the `StateParser` class and build various kinds of dataframes from it.
The simplest way to parse the annotations in a Neuroglancer state is to first save the state using the ‘Share’ button and then copy the state id (the last number in the URL). Alternatively, you could use the text downloaded from the {} button in the viewer and load this JSON into python.
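If you go the {} route, the downloaded text can be loaded with the standard library. This is a minimal sketch: the `state_text` string here is a toy stand-in for the text you would copy from the viewer, and a real state contains many more fields.

```python
import json

# Toy stand-in for the text copied from the {} button in the viewer;
# a real Neuroglancer state contains many more fields.
state_text = '{"layers": [{"name": "img", "type": "image"}]}'

# The resulting dictionary is what you would hand to parser.StateParser.
state_json = json.loads(state_text)
state_json['layers'][0]['name']  # 'img'
```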
In Neuroglancer, on the upper right, there is a ‘Share’ button.
When you click this link, you will be prompted for a google login. Any google profile will work, but it is convenient to use the same one as for this notebook. This will upload the text of your neuroglancer state (the JSON-format file, also accessible from the {} button) to a server and return a lookup number to access that state. A link with this state is automatically copied to your system’s clipboard.
When you paste it, you will see something like this:
https://neuroglancer.neuvue.io/?json_url=https://global.daf-apis.com/nglstate/api/v1/5560000195854336
for which the state id is 5560000195854336
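If you find yourself doing this often, the state id can be pulled out of a shared link programmatically. The helper below is a hypothetical convenience function (not part of `nglui`) built only on the standard library.

```python
from urllib.parse import urlparse, parse_qs

def state_id_from_link(url: str) -> int:
    """Extract the numeric state id from a shared Neuroglancer link."""
    # The shared link stores the state location in the 'json_url' query parameter.
    json_url = parse_qs(urlparse(url).query)['json_url'][0]
    # The state id is the last path component of that URL.
    return int(json_url.rstrip('/').rsplit('/', 1)[-1])

link = ('https://neuroglancer.neuvue.io/?json_url='
        'https://global.daf-apis.com/nglstate/api/v1/5560000195854336')
state_id_from_link(link)  # 5560000195854336
```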
You can then download the JSON and use the `annotation_dataframe` function to generate a comprehensive dataframe of all the annotations in the state.

You can now access different aspects of the state. For example, to get a list of all layers and their core info, you can use the `layer_dataframe` method.
```python
state.layer_dataframe()
```
| | layer | type | source | archived |
|---|---|---|---|---|
| 0 | img | image | precomputed://https://bossdb-open-data.s3.amaz... | False |
| 1 | seg | segmentation_with_graph | graphene://https://minnie.microns-daf.com/segm... | False |
| 2 | syns_in | annotation | None | False |
| 3 | syns_out | annotation | None | False |
This will give you a table with a row for each layer and columns for the layer name, type, source, and whether the layer is archived (i.e. no longer actively displayed) or not.
With the parser you can also get a list of all annotations using the `annotation_dataframe` method.
```python
state.annotation_dataframe()
```
| | layer | anno_type | point | pointB | linked_segmentation | tags | group_id | description |
|---|---|---|---|---|---|---|---|---|
| 0 | syns_in | point | [294095, 196476, 24560] | NaN | [864691136333760691] | [] | None | None |
| 1 | syns_in | point | [294879, 196374, 24391] | NaN | [864691136333760691] | [] | None | None |
| 2 | syns_in | point | [300246, 200562, 24297] | NaN | [864691136333760691] | [] | None | None |
| 3 | syns_in | point | [300894, 201844, 24377] | NaN | [864691136333760691] | [] | None | None |
| 4 | syns_in | point | [294742, 199552, 23392] | NaN | [864691136333760691] | [] | None | None |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 5272 | syns_out | point | [277152, 200746, 22723] | NaN | [864691132294257136] | [] | None | None |
| 5273 | syns_out | point | [298884, 189782, 21453] | NaN | [864691132135519710] | [] | None | None |
| 5274 | syns_out | point | [330182, 198986, 23862] | NaN | [864691132100215248] | [] | None | None |
| 5275 | syns_out | point | [326552, 186446, 24792] | NaN | [864691131892380409] | [] | None | None |
| 5276 | syns_out | point | [275784, 206898, 21524] | NaN | [864691131593914919] | [] | None | None |

5277 rows × 8 columns
This will give you a dataframe where each row is an annotation, with columns for the layer name, point locations, annotation type, annotation id, descriptive text, linked segmentations, tags, etc. If you have multiple annotation layers, each layer is identified by the `layer` column and all points across all layers are concatenated together.

The point coordinates are returned at the resolution of the neuroglancer view (default: 4x4x40 nm). You can change this with the `point_resolution` argument, which will rescale the points to the requested resolution.
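As a sanity check on what the rescaling does, the first `syns_in` point from the table above can be converted by hand. This sketch assumes the rescaling multiplies each coordinate by the ratio of the requested `point_resolution` to the viewer resolution, which matches the values shown in the tables in this section:

```python
# Hand conversion of the first syns_in point from the annotation table above,
# assuming rescaling multiplies by point_resolution / viewer_resolution.
viewer_resolution = [4, 4, 40]    # default Neuroglancer view, in nm
point_resolution = [1, 1, 1]      # resolution requested from annotation_dataframe
point = [294095, 196476, 24560]

rescaled = [p * r / v for p, r, v in zip(point, point_resolution, viewer_resolution)]
rescaled  # [73523.75, 49119.0, 614.0]
```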
The `description` column is populated with any text you have attached to the point.

Note that tags in the dataframe are stored as a list of integers, with each integer corresponding to one of the tags in the layer's tag list. To get the mapping between the tag index and the tag name for each layer, you can use the `tag_dictionary` function.
```python
parser.tag_dictionary(state_json, layer_name='syns_out')
```
```
{1: 'targets_spine', 2: 'targets_shaft', 3: 'targets_soma'}
```
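With that dictionary in hand, the integer tags stored on an annotation can be translated to names with plain Python. The tag list here is a hypothetical example value for a single annotation row:

```python
# Tag dictionary as returned above for the 'syns_out' layer.
tag_dict = {1: 'targets_spine', 2: 'targets_shaft', 3: 'targets_soma'}

# Hypothetical 'tags' value from a single annotation row.
tags = [1, 3]
named = [tag_dict[t] for t in tags]
named  # ['targets_spine', 'targets_soma']
```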
Alternatively, if you are using tags, the `expand_tags=True` argument will create a column for every tag and assign each row a boolean value based on whether the tag is present in the annotation. Another sometimes-useful option is `split_points=True`, which will create a separate column for each x, y, and z coordinate in the annotation.
```python
state.annotation_dataframe(expand_tags=True, split_points=True, point_resolution=[1,1,1])
```
| | layer | anno_type | linked_segmentation | tags | group_id | description | point_x | point_y | point_z | pointB_x | pointB_y | pointB_z | targets_spine | targets_shaft | targets_soma |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | syns_in | point | [864691136333760691] | [] | None | None | 73523.75 | 49119.0 | 614.000 | NaN | NaN | NaN | NaN | NaN | NaN |
| 1 | syns_in | point | [864691136333760691] | [] | None | None | 73719.75 | 49093.5 | 609.775 | NaN | NaN | NaN | NaN | NaN | NaN |
| 2 | syns_in | point | [864691136333760691] | [] | None | None | 75061.50 | 50140.5 | 607.425 | NaN | NaN | NaN | NaN | NaN | NaN |
| 3 | syns_in | point | [864691136333760691] | [] | None | None | 75223.50 | 50461.0 | 609.425 | NaN | NaN | NaN | NaN | NaN | NaN |
| 4 | syns_in | point | [864691136333760691] | [] | None | None | 73685.50 | 49888.0 | 584.800 | NaN | NaN | NaN | NaN | NaN | NaN |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 5272 | syns_out | point | [864691132294257136] | [] | None | None | 69288.00 | 50186.5 | 568.075 | NaN | NaN | NaN | False | False | False |
| 5273 | syns_out | point | [864691132135519710] | [] | None | None | 74721.00 | 47445.5 | 536.325 | NaN | NaN | NaN | False | False | False |
| 5274 | syns_out | point | [864691132100215248] | [] | None | None | 82545.50 | 49746.5 | 596.550 | NaN | NaN | NaN | False | False | False |
| 5275 | syns_out | point | [864691131892380409] | [] | None | None | 81638.00 | 46611.5 | 619.800 | NaN | NaN | NaN | False | False | False |
| 5276 | syns_out | point | [864691131593914919] | [] | None | None | 68946.00 | 51724.5 | 538.100 | NaN | NaN | NaN | False | False | False |

5277 rows × 15 columns
Generating Neuroglancer States from Data
The `nglui.statebuilder` package is used to build Neuroglancer states that express arbitrary data.

The Site Configuration options determine the default configuration for your `StateBuilder` objects. We will set this to `spelunker`, and set the materialization version to `1412` for reproducibility.
```python
from nglui import statebuilder

statebuilder.site_utils.set_default_config(target_site='spelunker')
client.version = 1412
```
The general pattern is that one makes a “StateBuilder” object that holds rules for how to build a Neuroglancer state layer by layer, including selecting certain neurons and populating layers of annotations. You then pass a DataFrame to the StateBuilder, and the rules tell it how to render the DataFrame into a Neuroglancer link. The same set of rules can be used on similar dataframes with different data, such as synapses from different neurons.
To understand the detailed use of the package, please see the tutorial.
However, a number of basic helper functions allow `nglui` to be used for common tasks in just a few lines.
For example, to generate a Neuroglancer state that shows a neuron and its synaptic inputs and outputs, we can use the `make_neuron_neuroglancer_link` helper function.
```python
from nglui.statebuilder import helpers

statebuilder.helpers.make_neuron_neuroglancer_link(
    client,
    864691135441799752,
    show_inputs=True,
    show_outputs=True,
    return_as='html',
)
```
The main helper functions are:

- `make_neuron_neuroglancer_link` - Shows one or more neurons and, optionally, synaptic inputs and/or outputs.
- `make_synapse_neuroglancer_link` - Using a pre-downloaded synapse table, make a link that shows the synapses and the listed synaptic partners.
- `make_point_statebuilder` - Generate a statebuilder to map a dataframe containing points (by default, formatted like a cell types table) to a Neuroglancer link.
In all cases, please look at the docstrings for more information on how to use the functions.
Uploading local annotations to neuroglancer
NGLUI also has many functions for turning data into Neuroglancer states. As a toy example, let’s re-upload the points from the `parser` example above.

Here we generate the state for the `seunglab` version. See the NGLUI documentation on site configuration for more options.
```python
data_df = state.annotation_dataframe()[['point','description','tags','linked_segmentation']]
data_df.head(3)
```
| | point | description | tags | linked_segmentation |
|---|---|---|---|---|
| 0 | [294095, 196476, 24560] | None | [] | [864691136333760691] |
| 1 | [294879, 196374, 24391] | None | [] | [864691136333760691] |
| 2 | [300246, 200562, 24297] | None | [] | [864691136333760691] |
```python
from nglui.statebuilder import *

# Set sensible defaults for your link generation
site_utils.set_default_config(target_site='seunglab')

img, seg = helpers.from_client(client)

# Make a basic imagery source layer
img_layer = ImageLayerConfig(img.source)

# Make a basic segmentation source layer
seg_layer = SegmentationLayerConfig(seg.source)

# # Alternately: set v1300 flat as source
# seg_layer = SegmentationLayerConfig('precomputed://gs://iarpa_microns/minnie/minnie65/seg_m1300')

# Create the mapping between your dataframe and the annotation layer
points = PointMapper(
    point_column='point',
    description_column='description',
    linked_segmentation_column='linked_segmentation',
    tag_column='tags',
)
anno_layer = AnnotationLayerConfig(
    name='new-annotations',
    linked_segmentation_layer='seg',
    mapping_rules=points,
    tags=['targets_spine', 'targets_shaft', 'targets_soma'],
)

# Stack the layers together, and render
sb = StateBuilder([img_layer, seg_layer, anno_layer], client=client)
sb.render_state(data_df.head(10), return_as='short')
```
```
'https://neuromancer-seung-import.appspot.com/?json_url=https://global.daf-apis.com/nglstate/api/v1/5069482274848768'
```
We built up the `StateBuilder` from different layers, including imagery, segmentation, and annotations.

- `PointMapper` determines how data is interpreted in the annotation layer, including which columns in the dataframe to use.
- `AnnotationLayerConfig` controls how the points will appear, including the name of the layer as it appears in Neuroglancer, which segmentation layer is activated under the points, and the list of active ‘tags’.

The StateBuilder is rendered in the final line, taking only the first 10 elements of the dataframe and returning a ‘shortened’ link. Other options include: `return_as='html'` for a hyperlink, `return_as='url'` for a long link, and `return_as='dict'` for an editable text version of the JSON state.