Giotto Viewer: Setting Up a Dataset
The instructions are provided below for various platforms.
1) 10X Genomics Visium datasets
2) SeqFISH/SeqFISH+/MerFISH
3) Slide-seq
Giotto Viewer is an interactive viewer. In addition to basics such as gene expression and cell annotations, it can display images (e.g. fluorescent staining) and cell segmentation information.
0. Before starting
Launch a Docker command-line terminal (or Windows Terminal if you have Ubuntu WSL2, Terminal.app on macOS, or Terminal in Ubuntu if you have a native installation). Make sure the versions of giotto-viewer and smfish-image-processing are the latest:
# For Docker:
$ sudo pip3 install --upgrade --force-reinstall --no-cache-dir --no-deps giotto-viewer smfish-image-processing
[sudo] password for dean: (enter dean)
# For others and user-based install:
# pip3 install --user --upgrade --force-reinstall --no-cache-dir --no-deps giotto-viewer smfish-image-processing
- If you see "Requirement already satisfied...", the new versions were not installed. Try running the same command again.
- You should see "Successfully uninstalled...", followed by "Successfully installed ...", for both the smfish-image-processing and giotto-viewer packages.
- At the time of writing, the newest versions are giotto-viewer 1.0.8 and smfish-image-processing 1.4.3.
#clear temporary files left over from previous runs
$ sudo rm -rf /tmp/pos*
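To confirm the upgrade took effect, you can also query pip directly. A quick sketch; the package names are the same ones used in the install command above:

```shell
# Print the name and version of each viewer package,
# or a hint if they are not installed.
pip3 show giotto-viewer smfish-image-processing 2>/dev/null \
  | grep -E '^(Name|Version):' \
  || echo "packages not found - run the install command above"
```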
1. Simple version (no image)
Read 0. Before starting. If there are no images or cell segmentations to display, the directions are simple. Assuming you are coming from the dataset examples and have run the exportGiottoViewer step as below:
> viewer_folder = '/home/qzhu/Mouse_cortex_viewer/'
# coming from seqFISH+ dataset
# select annotations, reductions and expression values to view in Giotto Viewer
> exportGiottoViewer(gobject = VC_test, output_directory = viewer_folder, annotations = c('cell_types', 'kmeans', 'global_cell_types', 'sub_cell_types', 'HMRF_k9_b.30'), dim_reductions = c('tsne', 'umap'), dim_reduction_names = c('tsne', 'umap'), expression_values = 'scaled', expression_rounding = 3, overwrite_dir = T)
To save you time, we have already performed this step and saved the output. Download Mouse_cortex_viewer.zip.
Contents of the archive:
-rw-r--r-- 1 qzhu qzhu 18519 Mar 13 2020 umap_umap_dim_coord.txt
-rw-r--r-- 1 qzhu qzhu 8834 Mar 13 2020 total_expr_num_annot_information.txt
-rw-r--r-- 1 qzhu qzhu 1254 Mar 13 2020 sub_leiden_clus_select_annot_information.txt
-rw-r--r-- 1 qzhu qzhu 75 Mar 13 2020 sub_leiden_clus_select_annot_information.annot
-rw-r--r-- 1 qzhu qzhu 92 Mar 13 2020 offset_file.txt
-rw-r--r-- 1 qzhu qzhu 1046 Mar 13 2020 leiden_clus_annot_information.txt
-rw-r--r-- 1 qzhu qzhu 32 Mar 13 2020 leiden_clus_annot_information.annot
-rw-r--r-- 1 qzhu qzhu 1046 Mar 13 2020 HMRF_2_k9_b.32_annot_information.txt
-rw-r--r-- 1 qzhu qzhu 36 Mar 13 2020 HMRF_2_k9_b.32_annot_information.annot
-rw-r--r-- 1 qzhu qzhu 63376 Mar 13 2020 giotto_gene_ids.txt
-rw-r--r-- 1 qzhu qzhu 4599 Mar 13 2020 giotto_cell_ids.txt
-rw-r--r-- 1 qzhu qzhu 8170 Mar 13 2020 centroid_locations.txt
-rw-r--r-- 1 qzhu qzhu 1254 Mar 13 2020 cell_types_annot_information.txt
-rw-r--r-- 1 qzhu qzhu 149 Mar 13 2020 cell_types_annot_information.annot
-rw-r--r-- 1 qzhu qzhu 48 Mar 13 2020 annotation_num_list.txt
-rw-r--r-- 1 qzhu qzhu 210 Mar 13 2020 annotation_list.txt
-rw-r--r-- 1 qzhu qzhu 34455952 Mar 13 2020 giotto_expression.csv
Note. If you are using Docker, download Mouse_cortex_viewer.zip to the host directory that is shared with Docker. Then, inside Docker, extract it:
dean@e527827b2943:/$ cd /data
dean@e527827b2943:/data$ unzip Mouse_cortex_viewer.zip
dean@e527827b2943:/data$ cd Mouse_cortex_viewer
If not using Docker (i.e. you installed the Viewer natively), you can download the file anywhere that is accessible, and extract Mouse_cortex_viewer.zip as normal.
$ unzip Mouse_cortex_viewer.zip
$ cd Mouse_cortex_viewer
In the Docker command-line terminal (or Windows Terminal if you have Ubuntu WSL2, Terminal.app on macOS, or Terminal in Ubuntu if you have a native installation), type the following commands:
#create step1 json template
$ giotto_setup_image --require-stitch=n --image=n --image-multi-channel=n --segmentation=n --multi-fov=n --output-json=step1.json
- If you get a "giotto_setup_image: command not found" error, the binary may have been installed to ~/.local/bin rather than /usr/local/bin. Either add ~/.local/bin to your $PATH variable, or specify the path explicitly when you run it, e.g. ~/.local/bin/giotto_setup_image --require-stitch=n --image=n ...
- Make sure giotto_expression.csv exists.
$ vim step1.json
#press i for insert mode. Move your cursor to the expression line, and change the expression file name to the one exportGiottoViewer generated in /home/qzhu/Mouse_cortex_viewer (most likely giotto_expression.csv).
#press ESC to exit insert mode, then :wq to save and exit vim.
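If you prefer a non-interactive edit, a sed substitution works as well. The sketch below runs on a minimal stand-in file, since the placeholder value inside your own step1.json may differ; check the actual "input" field first.

```shell
# Stand-in for step1.json; the placeholder name GENERIC_expression.csv
# is an assumption - check what your own template actually contains.
cat > step1.demo.json <<'EOF'
{ "new_task_1": { "task": "prepare_gene_expression",
                  "input": "GENERIC_expression.csv" } }
EOF
# Swap in the real expression file name in place:
sed -i 's/GENERIC_expression.csv/giotto_expression.csv/' step1.demo.json
grep '"input"' step1.demo.json
```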
Do the actions listed in step1.json:
$ smfish_step1_setup -c step1.json
Then create the viewing panels. By default, 2 side-by-side panels will be created (one for the physical view, one for dimension reduction). After running the command, you will notice that step2.json has been created.
$ giotto_setup_viewer --num-panel=2 --input-preprocess-json=step1.json --panel-1=PanelPhysicalSimple --panel-2=PanelTsne --output-json=step2.json --input-annotation-list=annotation_list.txt
Do the actions in step2.json:
$ smfish_read_config -c step2.json -o test.dec6.js -p test.dec6.html -q test.dec6.css
$ giotto_copy_js_css --output .
$ python3 -m http.server
Open your browser and navigate to http://localhost:8000/, then click on the file "test.dec6.html" to see the viewer. We recommend opening the viewer in incognito/private browsing mode to avoid a stale cache. If port 8000 is already in use, pass another port, e.g. python3 -m http.server 8080.
That's it.
1.1 Questions and customizations
1.1.1 I want to use Giotto Viewer independently of analyzer. What can I do to mimic the step of exportGiottoViewer in analyzer?
Please see this page about the format of annotations required by the viewer. You can also see this link for an example of files exported by exportGiottoViewer to help you get the input files in the right format.
1.1.2 Do I need to run all the above steps every time?
No. If only the cell annotations have changed, but the expression matrix has not, you can start from the giotto_setup_viewer step and do all subsequent steps. When you display the viewer, be sure to clear the browser cache, or run the Viewer in incognito mode (Chrome) or private browsing mode (Firefox).
1.1.3 If I update the annotations, do I need to run all the steps again?
See 1.1.2.
1.1.4 How to customize the viewer to more than 2 panels?
You can modify this part of the giotto_setup_viewer command line: --num-panel=2 --input-preprocess-json=step1.json --panel-1=PanelPhysicalSimple --panel-2=PanelTsne. Set --num-panel to a multiple of 2, then set --panel-1, --panel-2, --panel-3, --panel-4 to one of these panel choices: PanelPhysical, PanelTsne, PanelPhysicalSimple, PanelPhysical10X. Note: PanelPhysical and PanelPhysical10X require images; PanelPhysicalSimple and PanelTsne do not. PanelPhysical additionally requires cell segmentation.
For the 4-panel case, you can specify how the panels are linked. Open the step2.json generated in the previous step by giotto_setup_viewer. At the end of the JSON, check that these lines are present; if not, add them:
"interact_1": ["map_1", "map_2", "map_3", "map_4"],
"sync_1": ["map_1", "map_3"],
"sync_2": ["map_2", "map_4"]
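Hand-editing JSON is error-prone (a stray quote or trailing comma will break the viewer setup), so it is worth validating the file afterwards, e.g. with Python's built-in json.tool. A sketch, demonstrated on a stand-in file; run the same check on your real step2.json after editing it:

```shell
# Stand-in for the hand-edited linkage block:
cat > step2.demo.json <<'EOF'
{
  "interact_1": ["map_1", "map_2", "map_3", "map_4"],
  "sync_1": ["map_1", "map_3"],
  "sync_2": ["map_2", "map_4"]
}
EOF
# json.tool exits non-zero on invalid JSON:
python3 -m json.tool step2.demo.json > /dev/null && echo "valid JSON"
```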
2. Advanced version
2.1 With images, but no cell segmentation (e.g. Visium)
Read 0. Before starting. This case arises with the 10X Genomics Visium platform. 10X has standardized the Visium output file names, so these datasets are relatively easy to set up. Visium also provides the underlying staining image for visualization, which is incorporated into the viewer.
Download the demo package (mouse_visium_brain_viewer.zip), the mouse coronal 10X Visium dataset exported with the exportGiottoViewer() function. Also get the raw image (V1_Adult_Mouse_Brain_image.tif). Note that V1_Adult_Mouse_Brain_image.tif should be the RAW, full-size image, not the reduced-size one; it should be 300-500 MB.
Note. If you are using Docker, download mouse_visium_brain_viewer.zip and V1_Adult_Mouse_Brain_image.tif to the host directory that is shared with Docker. Then, inside Docker, extract it:
dean@e527827b2943:/$ cd /data
dean@e527827b2943:/data$ unzip mouse_visium_brain_viewer.zip
dean@e527827b2943:/data$ cp V1_Adult_Mouse_Brain_image.tif mouse_visium_brain_viewer/.
dean@e527827b2943:/data$ cd mouse_visium_brain_viewer
If not using Docker (i.e. you installed the Viewer natively), you can download the files anywhere that is accessible, and extract mouse_visium_brain_viewer.zip as normal.
$ unzip mouse_visium_brain_viewer.zip
$ cp V1_Adult_Mouse_Brain_image.tif mouse_visium_brain_viewer/.
$ cd mouse_visium_brain_viewer
Next, do the following:
$ giotto_setup_image --require-stitch=n --image=y --image-multi-channel=n --segmentation=n --multi-fov=n --output-json=step1.json
#automatically fill in image dimension in the step1 json file
$ giotto_step1_modify_json --add-image V1_Adult_Mouse_Brain_image.tif --input step1.json --output step1.json
#do the step1 actions
$ smfish_step1_setup -c step1.json
- You may encounter an error during smfish_step1_setup: "cache resources exhausted `pos.0.png' @ error/cache.c/OpenPixelCache/3984 (Magick::ImageMagickError)". If this happens, modify the /etc/ImageMagick-6/policy.xml file as root to increase the limits:
$ sudo vim /etc/ImageMagick-6/policy.xml
[sudo] password for dean: #enter dean as password
#once in vim, press i for insert mode, go to line ~58: <policy domain="resource" name="disk" value="1GiB"/>
#change 1GiB to 8GiB
#press ESC, then :wq to save and exit vim
Next, set up the panels. We will use the 2-panel configuration for illustration, but the 4-panel setup is similar.
$ giotto_setup_viewer --num-panel=2 --input-preprocess-json=step1.json --panel-1=PanelPhysical10X --panel-2=PanelTsne --output-json=step2.json --input-annotation-list=annotation_list.txt
Note the panel type for physical panel is PanelPhysical10X. PanelTsne stays the same.
Final steps:
$ smfish_read_config -c step2.json -o test.dec6.js -p test.dec6.html -q test.dec6.css
$ giotto_copy_js_css --output .
$ python3 -m http.server
Open your browser, navigate to http://localhost:8000/. Then click on the file "test.dec6.html" to see the viewer. That's it.
2.2 With images, with cell segmentation, and multiple FOVs (e.g. seqFISH(+), MerFISH)
Read 0. Before starting. These data have single-cell resolution. In addition, they have rich staining images and cell segmentations that can be overlaid.
If you have multiple fields of view (FOVs), which is likely, the images should be provided per FOV, unstitched. Cell segmentations (ROI zips) and cell coordinates should also be unstitched. In other words, all files should be organized per FOV.
The steps below will do the stitching as part of the processing.
Download the demo dataset to follow along: CORTEX (cortex.tar.gz) (10K genes, 500 cells, 5 FOVs). Contents of the archive:
-rw-r--r-- 1 zqian gcproj 67215245 Jun 28 16:27 segmentation_staining_1_MMStack_Pos0.ome.tif
-rw-r--r-- 1 zqian gcproj 67195681 Jun 28 16:27 segmentation_staining_1_MMStack_Pos1.ome.tif
-rw-r--r-- 1 zqian gcproj 67195681 Jun 28 16:27 segmentation_staining_1_MMStack_Pos2.ome.tif
-rw-r--r-- 1 zqian gcproj 67195681 Jun 28 16:27 segmentation_staining_1_MMStack_Pos3.ome.tif
-rw-r--r-- 1 zqian gcproj 67195689 Jun 28 16:27 segmentation_staining_1_MMStack_Pos4.ome.tif
-rw-r--r-- 1 zqian gcproj 102463 Jun 28 16:27 RoiSet_Pos0_real.zip
-rw-r--r-- 1 zqian gcproj 90112 Jun 28 16:27 RoiSet_Pos1_real.zip
-rw-r--r-- 1 zqian gcproj 72043 Jun 28 16:27 RoiSet_Pos2_real.zip
-rw-r--r-- 1 zqian gcproj 78095 Jun 28 16:27 RoiSet_Pos3_real.zip
-rw-r--r-- 1 zqian gcproj 73859 Jun 28 16:27 RoiSet_Pos4_real.zip
-rw-r--r-- 1 zqian gcproj 29738555 Jun 28 16:27 cortex_expression_zscore.csv
-rwxr-xr-x 1 zqian gcproj 17234 Jun 28 16:27 Cell_centroids.csv
-rw-r--r-- 1 zqian gcproj 134 Jun 28 16:28 offset.txt
-rw-r--r-- 1 zqian gcproj 2305 Jun 28 16:46 v_cortex_setup.json
-rw-r--r-- 1 zqian gcproj 36 Dec 6 2019 kmeans_annot_information.annot
-rw-r--r-- 1 zqian gcproj 122 Dec 6 2019 cell_types_annot_information.annot
-rw-r--r-- 1 zqian gcproj 18410 Dec 6 2019 umap_umap_dim_coord.txt
-rw-r--r-- 1 zqian gcproj 18279 Dec 6 2019 tsne_tsne_dim_coord.txt
-rw-r--r-- 1 zqian gcproj 1046 Dec 6 2019 kmeans_annot_information.txt
-rw-r--r-- 1 zqian gcproj 8170 Dec 6 2019 centroid_locations.txt
-rw-r--r-- 1 zqian gcproj 1046 Dec 6 2019 cell_types_annot_information.txt
-rw-r--r-- 1 zqian gcproj 80 Dec 18 19:02 annotation_list.txt
Note. If you are using Docker, download cortex.tar.gz to the host directory that is shared with Docker. Then, inside Docker, extract it:
dean@e527827b2943:/$ cd /data
dean@e527827b2943:/data$ tar -zxf cortex.tar.gz
dean@e527827b2943:/data$ cd cortex
If not using Docker (i.e. you installed the Viewer natively), you can download the file anywhere that is accessible, and extract cortex.tar.gz as normal.
$ tar -zxf cortex.tar.gz
$ cd cortex
Create a step1 json template:
$ giotto_setup_image --require-stitch=y --image=y --image-multi-channel=y --segmentation=y --multi-fov=y --output-json=step1.json --rotate-image=y
This outputs a template JSON file that drives the setup process. The JSON contains the sections the tool considers necessary to complete the setup, so think of it as the settings file. For example, "--image=y" means that images will be provided; "--segmentation=y" means that segmentation information will be provided. Take a look at the resulting JSON file below.
Generated step1.json template:
{
"tiff_width": 4028,
"tiff_height": 4028,
"positions": [0, 1, 2, 3, 4],
"stain_ids": [0, 1, 2, 3, 4],
"offset": "GENERIC_offset.txt",
"new_task_1": {
"task": "decouple_tiff",
"priority": 1,
"input": "GENERIC_[POSITION].tif",
"output_prefix": "pos[POSITION]",
"positions": [0, 1, 2, 3, 4]
},
"new_task_2": {
"task": "extract_roi_zip",
"priority": 2,
"input": "GENERIC_Roi_Pos[POSITION]_real.zip",
"output": "roi/roi.pos[POSITION].all.txt",
"tmp": "/tmp/pos[POSITION]",
"positions": [0, 1, 2, 3, 4]
},
"new_task_3": {
"task": "stitch_image",
"priority": 3,
"input": "pos[POSITION].[STAINID].tif",
"output": "pos[STAINID].joined.tif",
"offset": "offset.txt",
"positions": [0, 1, 2, 3, 4],
"stain_ids": [0, 1, 2, 3, 4]
},
"new_task_4": {
"task": "stitch_coord",
"priority": 4,
"input": "GENERIC_centroids.csv",
"output": "cell.centroid.stitched.pos.all.cells.txt",
"offset": "offset.txt",
"positions": [0, 1, 2, 3, 4]
},
"new_task_5": {
"task": "stitch_segmentation_roi",
"priority": 5,
"input": "roi/roi.pos[POSITION].all.txt",
"output": "roi.stitched.pos.all.cells.txt",
"offset": "offset.txt",
"positions": [0, 1, 2, 3, 4]
},
"new_task_6": {
"task": "align_segmentation_and_cell_centroid",
"priority": 6,
"input_cell_centroid": "cell.centroid.stitched.pos.all.cells.txt",
"input_segmentation": "roi.stitched.pos.all.cells.txt",
"output": "segmentation.to.cell.centroid.map.txt"
},
"new_task_7": {
"task": "tiling_image",
"priority": 7,
"input": "Pos.ch[STAINID].joined.tif",
"output_dir": "tiles.[STAINID]",
"zoom": 6,
"stain_ids": [0, 1, 2, 3, 4]
},
"new_task_8": {
"task": "prepare_gene_expression",
"priority": 8,
"input": "giotto_expression.csv",
"output_dir": "all.genes",
"csv_sep": ",",
"csv_header": 0,
"csv_index_col": 0,
"num_genes_per_file": 100
}
}
Next, modify the generated JSON file by using giotto_step1_modify_json:
$ giotto_step1_modify_json --input step1.json --add-image "segmentation_staining_1_MMStack_Pos[POSITION].ome.tif" --output step1.json
$ giotto_step1_modify_json --input step1.json --change-positions 0 1 2 3 4 --output step1.json
$ giotto_step1_modify_json --input step1.json --change-stain-ids 0 4 7 --output step1.json
$ giotto_step1_modify_json --input step1.json --change-offset offset.txt --output step1.json
$ giotto_step1_modify_json --input step1.json --change-segmentation "RoiSet_Pos[POSITION]_real.zip" --output step1.json
$ giotto_step1_modify_json --input step1.json --change-expression cortex_expression_zscore.csv --output step1.json
giotto_step1_modify_json modifies a section of the JSON file (step1.json) without the user having to edit the file manually.
Note:
- --add-image can accept multiple images. The wildcard [POSITION] can be used as a placeholder for position IDs.
- --change-segmentation can likewise accept segmentations for multiple images, e.g. RoiSet_Pos[POSITION]_real.zip.
- --change-stain-ids: a stain is Nissl, DAPI, polyA, etc. Since each TIFF is a multi-image stack, the stain ID is the index within each TIFF corresponding to Nissl, DAPI, or polyA.
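To make the placeholder semantics concrete, the sketch below expands [POSITION] the way the JSON suggests (our assumption: the placeholder is substituted with each entry of the positions list):

```shell
# Expand the [POSITION] placeholder over positions 0-4:
template='segmentation_staining_1_MMStack_Pos[POSITION].ome.tif'
for p in 0 1 2 3 4; do
  echo "$template" | sed "s/\[POSITION\]/$p/"
done
```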
After these modifications, step1.json should look like this:
{
"tiff_width": 2048,
"tiff_height": 2048,
"positions": [0, 1, 2, 3, 4],
"stain_ids": [0, 4, 7],
"new_task_1": {
"task": "decouple_tiff",
"priority": 1,
"input": "./segmentation_staining_1_MMStack_Pos[POSITION].ome.tif",
"output_prefix": "pos[POSITION]",
"positions": [0, 1, 2, 3, 4]
},
"new_task_2": {
"task": "extract_roi_zip",
"priority": 2,
"input": "RoiSet_Pos[POSITION]_real.zip",
"output": "roi/roi.pos[POSITION].all.txt",
"tmp": "/tmp/pos[POSITION]",
"positions": [0, 1, 2, 3, 4]
},
"new_task_3": {
"task": "rotate_image",
"priority": 3,
"input": "pos[POSITION].[STAINID].tif",
"output": "pos[POSITION].[STAINID].rotate.tif",
"angle": "left90",
"positions": [0, 1, 2, 3, 4],
"stain_ids": [0, 4, 7]
},
"new_task_4": {
"task": "rotate_coord",
"priority": 4,
"input": "Cell_centroids.csv",
"output": "Cell_centroids.rotate.csv",
"angle": "left90",
"positions": [0, 1, 2, 3, 4],
"stain_ids": [0, 4, 7]
},
"new_task_5": {
"task": "rotate_segmentation_roi",
"priority": 5,
"input": "roi/roi.pos[POSITION].all.txt",
"output": "roi/roi.pos[POSITION].all.rotate.txt",
"angle": "left90",
"positions": [0, 1, 2, 3, 4],
"stain_ids": [0, 4, 7]
},
"new_task_6": {
"task": "stitch_image",
"priority": 6,
"input": "pos[POSITION].[STAINID].rotate.tif",
"output": "pos[STAINID].joined.tif",
"offset": "offset.txt",
"positions": [0, 1, 2, 3, 4],
"stain_ids": [0, 4, 7]
},
"new_task_7": {
"task": "stitch_coord",
"priority": 7,
"input": "Cell_centroids.rotate.csv",
"output": "cell.centroid.stitched.pos.all.cells.txt",
"offset": "offset.txt",
"positions": [0, 1, 2, 3, 4]
},
"new_task_8": {
"task": "stitch_segmentation_roi",
"priority": 8,
"input": "roi/roi.pos[POSITION].all.rotate.txt",
"output": "roi.stitched.pos.all.cells.txt",
"offset": "offset.txt",
"positions": [0, 1, 2, 3, 4]
},
"new_task_9": {
"task": "align_segmentation_and_cell_centroid",
"priority": 9,
"input_cell_centroid": "cell.centroid.stitched.pos.all.cells.txt",
"input_segmentation": "roi.stitched.pos.all.cells.txt",
"output": "segmentation.to.cell.centroid.map.txt"
},
"new_task_10": {
"task": "tiling_image",
"priority": 10,
"input": "pos[STAINID].joined.tif",
"output_dir": "tiles.[STAINID]",
"zoom": 6,
"stain_ids": [0, 4, 7]
},
"new_task_11": {
"task": "prepare_gene_expression",
"priority": 11,
"input": "cortex_expression_zscore.csv",
"output_dir": "all.genes",
"csv_sep": ",",
"csv_header": 0,
"csv_index_col": 0,
"num_genes_per_file": 100
}
}
Once the modifications are finished, use step1.json to run the actual tasks:
$ smfish_step1_setup -c step1.json
Choose one of two cases below:
2.2.1: Case 1: Setting up two-panel configuration
$ giotto_setup_viewer --num-panel=2 --input-preprocess-json=step1.json --panel-1=PanelPhysical --panel-2=PanelTsne --output-json=step2.json --input-annotation-list=annotation_list.txt
The annotation_list.txt file was generated in a previous step by the exportGiottoViewer function. This setup will place the two panels in a horizontal configuration.
2.2.2: Case 2: Setting up four-panel configuration
$ giotto_setup_viewer --num-panel=4 --input-preprocess-json=step1.json --panel-1=PanelPhysical --panel-2=PanelTsne --panel-3=PanelPhysical --panel-4=PanelTsne --output-json=step2.json --input-annotation-list=annotation_list.txt
Open the generated step2.json file and make sure it looks similar to this.
Output JSON
{
"num_panel": 2,
"orientation": "horizontal",
"annotation_set": {
"num_annot": 2,
"annot_1": {
"file": "kmeans_annot_information.txt",
"name": "kmeans",
"mode": "discrete"
},
"annot_2": {
"file": "cell_types_annot_information.txt",
"name": "cell_types",
"mode": "discrete"
}
},
"map_1": {
"type": "PanelPhysical",
"maxBound": 2048,
"id": 1,
"annot": "kmeans",
"gene_list": "giotto_gene_ids.txt",
"tile": "nissl",
"dir_polyA": "tiles.0",
"dir_nissl": "tiles.4",
"dir_dapi": "tiles.7",
"gene_map": "all.genes/gene.map",
"segmentation": "roi.stitched.pos.all.cells.txt",
"segmentation_map": "segmentation.to.cell.centroid.map.txt",
"dir_gene_expression": "all.genes",
"map_height": "1000px"
},
"map_2": {
"type": "PanelTsne",
"id": 2,
"maxBound": 500,
"file_tsne": "umap_umap_dim_coord.txt",
"annot": "kmeans",
"gene_list": "giotto_gene_ids.txt",
"map_height": "1000px",
"gene_map": "all.genes/gene.map",
"dir_gene_expression": "all.genes"
},
"interact_1": ["map_1", "map_2"]
}
Final steps:
$ smfish_read_config -c step2.json -o test.dec6.js -p test.dec6.html -q test.dec6.css
$ giotto_copy_js_css --output .
$ python3 -m http.server
Open your browser, navigate to http://localhost:8000/. Then click on the file "test.dec6.html" to see the viewer. That's it.
2.2.3. Will this setup instruction accept multiple images? Does it handle stitching of images/coordinates?
Yes to all of these questions! In giotto_setup_image above, the options --require-stitch=y --image=y --multi-fov=y exist exactly for this purpose. With this stitching template, you need to specify the offset.txt file, which defines how the images are stitched to each other. Offset file:
Pos0.x Pos0.y 0 0
Pos1.x Pos1.y 1654.97 0
Pos2.x Pos2.y Pos1.x+1750.75 0
Pos3.x Pos3.y Pos2.x+1674.35 0
Pos4.x Pos4.y Pos3.x+675.5 1438.02
Then, in giotto_step1_modify_json, remember to specify offset: --change-offset offset.txt.
Not only the images, but also the cell coordinates and cell segmentations are stitched according to the same offset file (see the "stitch_coord" and "stitch_segmentation_roi" sections in the step1.json file).
As such, the input cell coordinates and segmentations should be provided unstitched.
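The stitching arithmetic can be sketched as follows. This is our reading of the stitch_coord task, not guaranteed to match the tool's internals: a cell's stitched position is its local position within the FOV plus that FOV's offset. A toy awk demo (file names, columns, and values are made up for illustration):

```shell
# Toy per-FOV offsets: FOV id, x offset, y offset
cat > offsets.demo.txt <<'EOF'
0 0 0
1 1654.97 0
EOF
# Toy unstitched cells: FOV id, local x, local y
cat > cells.demo.txt <<'EOF'
0 100 200
1 100 200
EOF
# Shift each cell by its FOV's offset to get stitched coordinates;
# FOV 1's cell lands at x = 100 + 1654.97 = 1754.97
awk 'NR==FNR {ox[$1]=$2; oy[$1]=$3; next}
     {print $1, $2+ox[$1], $3+oy[$1]}' offsets.demo.txt cells.demo.txt
```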
3. Other resources
3.1 Manual preparation of Giotto Viewer inputs without exportGiottoViewer()
Giotto Viewer can work without the Analyzer part. For example, you may have your own spatial expression matrix and UMAP coordinates that you want to visualize with the Viewer. In that case, prepare the input files manually in the required format by following this tutorial: Setting up Giotto Viewer inputs.
3.2 Exporting cell selections for Giotto Analyzer reanalysis
Suppose a user makes two cell selections in the Giotto Viewer and saves each selection to its own file (/tmp/selection.1.txt and /tmp/selection.2.txt). The following shows how to perform differential expression analysis on these two selections of cells in Giotto Analyzer, demonstrating iterative analysis.
#load existing cell annotations
cell_metadata = pDataDT(VC_test)
#initialize a per-cell annotation vector with 0 (= unselected)
annot = rep(0, nrow(cell_metadata))
#read selected cell indices for selection 1
yy = read.table("/tmp/selection.1.txt", header = FALSE, row.names = NULL)
annot[yy[, 1]] = 1
#read selected cell indices for selection 2
yy = read.table("/tmp/selection.2.txt", header = FALSE, row.names = NULL)
annot[yy[, 1]] = 2
#append the new annotation column to the cell metadata
cell_metadata = base::cbind(cell_metadata, annot)
VC_test@cell_metadata = cell_metadata
#do a one-vs-all comparison for the selected cells
markers = findMarkers_one_vs_all(gobject = VC_test, expression_values = 'normalized', cluster_column = 'annot', method = 'scran', pval = 0.01, logFC = 0.5)
#do a pairwise comparison between the two cell selections
markers = findMarkers(gobject = VC_test, expression_values = 'normalized', method = 'scran', cluster_column = 'annot', group_1 = 1, group_2 = 2)
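For reference, the selection files that the R code reads are assumed to be plain text with one cell index per line; that is what read.table(..., header = FALSE) consumes here, but verify against your own saved selections. A hypothetical example:

```shell
# Hypothetical example of a saved selection file:
# one 1-based cell index per line.
cat > /tmp/selection.1.txt <<'EOF'
3
17
42
EOF
wc -l < /tmp/selection.1.txt
```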