Title
Combinatorial geometry of neural codes, neural data analysis, and neural networks
Author
Abstract
This dissertation explores applications of discrete geometry in mathematical neuroscience. We begin with convex neural codes, which model the activity of hippocampal place cells and other neurons with convex receptive fields. In Chapter 4, we introduce order-forcing, a tool for constraining convex realizations of codes, and use it to construct new examples of non-convex codes with no local obstructions. In Chapter 5, we relate oriented matroids to convex neural codes, showing that a code has a realization by convex polytopes if and only if it is the image of a representable oriented matroid under a neural code morphism. We also show that determining whether a code is convex is at least as difficult as determining whether an oriented matroid is representable, which implies that deciding convexity of a neural code is NP-hard. Next, we turn to the underlying rank of a matrix. This problem is motivated by the need to determine the dimensionality of (neural) data that has been corrupted by an unknown monotone transformation. In Chapter 6, we introduce two tools for computing underlying rank, minimal nodes and the Radon rank, and apply them to analyze calcium imaging data from a larval zebrafish. In Chapter 7, we explore underlying rank in more detail, establish connections to oriented matroid theory, and show that computing underlying rank is also NP-hard. Finally, we study the dynamics of threshold-linear networks (TLNs), a simple model of the activity of neural circuits. In Chapter 9, we describe the nullcline arrangement of a threshold-linear network and show that a subset of its chambers forms an attracting set. In Chapter 10, we focus on combinatorial threshold-linear networks (CTLNs), which are TLNs defined from a directed graph. We prove that if the graph of a CTLN is a directed acyclic graph, then all trajectories of the CTLN approach a fixed point.
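
Background sketch. The notion of a convex code referenced above is standard in this literature; the following is a brief statement of the usual definition, included as context, and the dissertation's own conventions may differ in details. A neural code on n neurons is a set of codewords \(\mathcal{C} \subseteq 2^{[n]}\), and a collection of open sets \(\mathcal{U} = \{U_1, \dots, U_n\}\) in \(\mathbb{R}^d\) realizes the code
\[
\mathcal{C}(\mathcal{U}) \;=\; \Bigl\{ \sigma \subseteq [n] \;:\; \bigcap_{i \in \sigma} U_i \setminus \bigcup_{j \notin \sigma} U_j \neq \emptyset \Bigr\}.
\]
The code \(\mathcal{C}\) is called convex if \(\mathcal{C} = \mathcal{C}(\mathcal{U})\) for some \(\mathcal{U}\) consisting of convex open sets.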
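
A plausible formalization of the underlying rank discussed in the abstract (an assumption inferred from the description of data corrupted by an unknown monotone transformation, not a quotation from the dissertation) is
\[
\operatorname{urank}(M) \;=\; \min \bigl\{ \operatorname{rank}(A) \;:\; M_{ij} = f(A_{ij}) \text{ for some strictly increasing } f : \mathbb{R} \to \mathbb{R} \bigr\},
\]
that is, the smallest rank attainable by a matrix whose entries appear in the same relative order as those of \(M\).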
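
The threshold-linear network model mentioned in the final part of the abstract is commonly written as the ODE below; the CTLN weight rule shown uses the standard parameter convention from the CTLN literature and is given here as background rather than as the dissertation's exact setup. The dynamics are
\[
\frac{dx_i}{dt} \;=\; -x_i + \Bigl[\, \sum_{j=1}^{n} W_{ij} x_j + b_i \Bigr]_+, \qquad [y]_+ = \max(y, 0),
\]
and for a CTLN built from a directed graph \(G\) with parameters \(0 < \varepsilon < \delta\),
\[
W_{ij} \;=\;
\begin{cases}
0 & \text{if } i = j,\\
-1 + \varepsilon & \text{if } j \to i \text{ in } G,\\
-1 - \delta & \text{if } j \not\to i \text{ in } G.
\end{cases}
\]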