Hello everyone. Today we're not covering anything new; instead we'll review scanpy's approach to integrating multiple 10X single-cell (or 10X spatial transcriptomics) samples. The approach differs quite a bit from Seurat's, and so far relatively few papers cite scanpy for multi-sample integration — probably because Seurat is so dominant — but personally I think it is a clever design. Let's walk through the process and discuss the points that need attention.
First, the introduction from the docs: The ingest function assumes an annotated reference dataset that captures the biological variability of interest. (This sentence matters: we first need an annotated reference dataset, which is then used to "capture" the biological variation in the new — e.g. disease — samples.) The rationale is to fit a model on the reference data and use it to project new data (fit a model on the reference dataset, then use it to project the new dataset in). For the time being, this model is a PCA combined with a neighbor lookup search tree, for which we use UMAP's implementation.
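To make the "PCA plus neighbor lookup" idea concrete, here is a toy numpy sketch of the mechanism — my own illustration under simplified assumptions (synthetic data, plain 1-nearest-neighbour instead of UMAP's search tree), not scanpy's actual implementation:

```python
# Toy illustration of the idea behind ingest (NOT scanpy's code):
# fit PCA on the labelled reference only, project the query into the
# same PCA space, then transfer labels via a nearest-neighbour lookup.
import numpy as np

rng = np.random.default_rng(0)
# reference: two well-separated synthetic "cell types" in a 50-gene space
ref = np.vstack([rng.normal(0, 1, (100, 50)), rng.normal(5, 1, (100, 50))])
ref_labels = np.array(['typeA'] * 100 + ['typeB'] * 100)

# PCA fitted on the reference alone (SVD of the centred matrix)
mean = ref.mean(axis=0)
_, _, vt = np.linalg.svd(ref - mean, full_matrices=False)
components = vt[:10]                        # top 10 principal axes
ref_pcs = (ref - mean) @ components.T

# query cells are projected with the reference's mean and axes, never refit
query = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(5, 1, (20, 50))])
query_pcs = (query - mean) @ components.T

# 1-nearest-neighbour label transfer in the reference PCA space
dists = ((query_pcs[:, None, :] - ref_pcs[None, :, :]) ** 2).sum(axis=2)
transferred = ref_labels[dists.argmin(axis=1)]
```

The key asymmetry is visible here: the model (mean and principal axes) comes entirely from the reference, and the query is only ever mapped into that fixed space.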
Note this point: for integration, scanpy expects an annotated dataset to serve as the reference. Conceptually that is how it should work, whereas with Seurat we often integrate without one. Personally I find scanpy's approach the more principled of the two.
Some characteristics:
- As ingest is simple and the procedure clear, the workflow is transparent and fast.
- Like BBKNN, ingest leaves the data matrix itself invariant.
- Unlike BBKNN, ingest solves the label mapping problem (like scmap) and maintains an embedding that might have desired properties like specific clusters or trajectories.
Let's look at the official scanpy example.
We refer to this asymmetric dataset integration as ingesting annotations from an annotated reference adata_ref into an adata that still lacks this annotation (consistent with the introduction above). It is different from learning a joint representation that integrates datasets in a symmetric way as BBKNN, Scanorama, Conos, CCA (CCA is the method Seurat commonly uses for integration) (e.g. in Seurat) or a conditional VAE (e.g. in scVI, trVAE) would do, but comparable to the initial MNN implementation in scran (scran itself is not used much these days; more often people only need part of its functionality). Either way, scanpy's approach to sample integration really is different from the other tools'.
Load the modules and example data
import scanpy as sc
import pandas as pd
import seaborn as sns
adata_ref = sc.datasets.pbmc3k_processed() # this is an earlier version of the dataset from the pbmc3k tutorial
adata = sc.datasets.pbmc68k_reduced()
The first point worth noting
var_names = adata_ref.var_names.intersection(adata.var_names)
adata_ref = adata_ref[:, var_names]
adata = adata[:, var_names]
This step needs a bit of care. The highly variable genes come from adata; for the reference we do not recompute highly variable genes, but directly intersect adata's genes with all of the reference's genes, then subset both datasets to the shared genes used downstream. This matters because it touches on several issues, such as the number of highly variable genes and the selection thresholds. I like this approach as well: it lets the differences between the normal and disease datasets show up during dimensionality reduction and clustering.
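The intersection itself is just pandas `Index.intersection` applied to the two `var_names`. A minimal sketch with made-up gene symbols (the real tutorial intersects `adata_ref.var_names` with `adata.var_names`):

```python
import pandas as pd

# hypothetical gene symbols standing in for the two datasets' var_names
ref_genes = pd.Index(['CD3D', 'CD79A', 'LYZ', 'NKG7', 'PPBP'])
query_genes = pd.Index(['CD3D', 'LYZ', 'NKG7', 'MS4A1'])

var_names = ref_genes.intersection(query_genes)  # genes present in both
print(sorted(var_names))
```

Both AnnData objects are then sliced with `[:, var_names]`, so everything downstream runs on exactly this shared gene set.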
Run dimensionality reduction and clustering on the reference: the reference dataset, now subset to the disease sample's highly variable genes, goes through the usual PCA / neighbors / UMAP steps:
sc.pp.pca(adata_ref)
sc.pp.neighbors(adata_ref)
sc.tl.umap(adata_ref)
sc.pl.umap(adata_ref, color='louvain')
Note that this plot is not what you would get by reducing and clustering the reference on its own highly variable genes — that is why some cell types blend into each other. The whole point of this setup is the integration with the disease dataset.
Transferring the labels
Let’s map labels and embeddings from adata_ref to adata based on a chosen representation. Here, we use adata_ref.obsm['X_pca'] to map cluster labels and the UMAP coordinates.
sc.tl.ingest(adata, adata_ref, obs='louvain')  # nowadays leiden is used more often than louvain
adata.uns['louvain_colors'] = adata_ref.uns['louvain_colors'] # fix colors
sc.pl.umap(adata, color=['louvain', 'bulk_labels'], wspace=0.5)
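After ingest it is worth sanity-checking the transferred labels against any annotation the query already carries, for example with a cross-tabulation. A toy example with made-up labels (on the real object you would crosstab adata.obs['louvain'] against adata.obs['bulk_labels']):

```python
import pandas as pd

# made-up labels standing in for the adata.obs columns after ingest
transferred = pd.Series(['B', 'B', 'T', 'T', 'T'], name='louvain')
existing = pd.Series(['B cell', 'B cell', 'T cell', 'B cell', 'T cell'],
                     name='bulk_labels')

ct = pd.crosstab(transferred, existing)  # rows: transferred, cols: existing
print(ct)
```

A clean mapping shows up as a near-diagonal table; rows spread across many columns flag cell types where the transfer is doubtful.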
Concatenating the datasets
adata_concat = adata_ref.concatenate(adata, batch_categories=['ref', 'new'])
adata_concat.obs.louvain = adata_concat.obs.louvain.astype('category')
adata_concat.obs.louvain = adata_concat.obs.louvain.cat.reorder_categories(adata_ref.obs.louvain.cat.categories)  # fix category ordering (inplace=True has been removed in newer pandas)
adata_concat.uns['louvain_colors'] = adata_ref.uns['louvain_colors'] # fix category colors
sc.pl.umap(adata_concat, color=['batch', 'louvain'])
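A note on the reorder_categories line above: newer pandas versions have dropped inplace=True, so the assignment form is the safer spelling. A toy sketch of the same fix on a plain categorical Series:

```python
import pandas as pd

# toy categorical standing in for adata_concat.obs.louvain
s = pd.Series(pd.Categorical(['2', '0', '1'], categories=['2', '0', '1']))
ref_order = ['0', '1', '2']  # stand-in for adata_ref.obs.louvain.cat.categories

# assignment form; reorder_categories(..., inplace=True) fails on recent pandas
s = s.cat.reorder_categories(ref_order)
print(list(s.cat.categories))
```

Reordering only changes the category order (and hence legend/color order in plots); the values themselves are untouched.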
While there seems to be some batch-effect in the monocytes and dendritic cell clusters, the new data is otherwise mapped relatively homogeneously. (So a certain amount of batch effect does remain.)
The megakaryocytes are only present in adata_ref and no cells from adata map onto them. If reference and query data are interchanged, the megakaryocytes no longer appear as a separate cluster. This is an extreme case, as the reference data is very small; but one should always question whether the reference data contain enough biological variation to meaningfully accommodate query data.
Batch correction on the concatenated data
sc.tl.pca(adata_concat)
sc.external.pp.bbknn(adata_concat, batch_key='batch')
sc.tl.umap(adata_concat)
sc.pl.umap(adata_concat, color=['batch', 'louvain'])
BBKNN's correction works reasonably well, but it has one drawback: it can make it harder to discover new cell types.
Density plot
sc.tl.embedding_density(adata_concat, groupby='batch')
sc.pl.embedding_density(adata_concat, groupby='batch')
Partial visualization of a subset of groups in embedding
for batch in ['ref', 'new']:  # the batch_categories set above; the official tutorial's pancreas section uses ['1', '2', '3']
    sc.pl.umap(adata_concat, color='batch', groups=[batch])
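Rather than hardcoding batch names, it is more robust to loop over the categories actually present in the column. A toy version with a stand-in categorical (in the real loop the append becomes the sc.pl.umap call):

```python
import pandas as pd

# stand-in for adata_concat.obs['batch']
obs = pd.DataFrame({'batch': pd.Categorical(['ref', 'ref', 'new'])})

seen = []
for batch in obs['batch'].cat.categories:  # categories are sorted: 'new', 'ref'
    # real code: sc.pl.umap(adata_concat, color='batch', groups=[batch])
    seen.append(batch)
print(seen)
```

This way the same loop keeps working whatever batch_categories were chosen at concatenation time.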
This deserves further exploration.