### November 2, 2017

#### Speaker: Andrea Zubac (York U)

#### Nonlinear statistical filtering for noise removal in radar target tracking

The common usage of the word "filter" refers to a device that removes unwanted components from a mixture, such as a cigarette filter reducing the number of fine particles inhaled with tobacco smoke. Similarly, in signal processing, a filter refers to a process that removes unwanted components from a signal. Often, a signal processing filter removes a range of frequencies from a signal: it "filters out" unwanted frequency components. A statistically defined filter, by contrast, removes noise from a noisy signal. The first statistically defined filter to be described was the Wiener filter, developed by Norbert Wiener during the 1940s. It paved the way for other statistically defined filters, including the Kalman filter, which is the focus of this talk. I will start with a brief description of the Wiener filter, then describe and show an implementation of the (linear) Kalman filter. Next, I will show three nonlinear filters based on the Kalman filter: the extended Kalman filter, the second-order nonlinear filter, and the Monte Carlo simulation filter. Finally, I will show results of a simulation study comparing these filters applied to a nonlinear radar target tracking problem.
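The predict/update cycle of the linear Kalman filter can be illustrated in a few lines. The sketch below is a minimal scalar case (a random-walk state observed in Gaussian noise), not the speaker's implementation; the state model, noise levels `Q` and `R`, and initial values are illustrative assumptions.

```python
import numpy as np


def kalman_filter(measurements, x0=0.0, P0=1.0, Q=1e-4, R=0.1**2):
    """Scalar linear Kalman filter for a random-walk state model:
    x_k = x_{k-1} + w_k (process noise variance Q),
    z_k = x_k + v_k     (measurement noise variance R)."""
    x, P = x0, P0
    estimates = []
    for z in measurements:
        # Predict: state carries over; uncertainty grows by Q.
        P = P + Q
        # Update: blend prediction and measurement via the Kalman gain.
        K = P / (P + R)
        x = x + K * (z - x)
        P = (1 - K) * P
        estimates.append(x)
    return estimates


# Illustrative run: a constant true state observed through noise.
rng = np.random.default_rng(0)
truth = 1.0
zs = truth + rng.normal(0.0, 0.1, 200)
est = kalman_filter(zs)
```

After the initial transient, the filtered estimates track the true state with substantially lower variance than the raw measurements, which is the point of the statistical filtering the abstract describes.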

### October 19, 2017

#### Speaker: Allysa Lumley (York U)

#### Explicit results on Chebyshev functions: A prime counting adventure

Consider the function $\pi(x)=\#\{n\le x : n \text{ is prime}\}$. Legendre conjectured that $\pi(x)\sim \frac{x}{\log x}$. Nearly 100 years later, in 1896, Hadamard and de la Vallée Poussin proved an equivalent statement by considering the weighted prime counting function $\psi(x)=\sum_{p^k\le x}\log p$. In 1962, Rosser and Schoenfeld gave a method to explicitly estimate the error term in the approximation of $\psi(x)$. This result relies on information concerning the non-trivial zeros of the Riemann zeta function $\zeta(s)$, and subsequent numerical improvements to this information have also translated into improved estimates for $\psi(x)$.
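Both counting functions are easy to tabulate numerically. The sketch below (an illustration, not part of the talk) computes $\pi(x)$ with a sieve of Eratosthenes and $\psi(x)$ by summing $\log p$ over prime powers, so one can check Legendre's approximation and the normalization $\psi(x)\sim x$ directly.

```python
import math


def prime_pi_and_psi(x):
    """Return (pi(x), psi(x)): the prime-counting function and
    Chebyshev's weighted count psi(x) = sum of log p over p^k <= x."""
    # Sieve of Eratosthenes: is_prime[i] == 1 iff i is prime.
    is_prime = bytearray([1]) * (x + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(x) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    pi = sum(is_prime)
    psi = 0.0
    for p in range(2, x + 1):
        if is_prime[p]:
            pk = p
            while pk <= x:  # each prime power p^k <= x contributes log p
                psi += math.log(p)
                pk *= p
    return pi, psi


pi6, psi6 = prime_pi_and_psi(10**6)
```

At $x=10^6$ one finds $\pi(x)=78498$, the ratio $\pi(x)\log x / x \approx 1.08$ (the $x/\log x$ approximation undercounts, consistent with the prime number theorem's error term), and $\psi(x)/x$ already within a fraction of a percent of 1.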

In this talk, we present various new explicit methods such as introducing some smooth weights and establishing some zero density estimates for the Riemann zeta function. Additionally, we will discuss results using these new techniques for finding primes in short intervals and the distribution of primes in arithmetic progressions.

### October 12, 2017

#### Speaker: Sergio Garcia Balan (York U)

#### Problems on D-spaces, Menger spaces and the star versions of the Menger property

A topological space $X$ is a $D$-*space* if for every neighborhood assignment $\{N(x):x \in X\}$ (that is, $N(x)$ is an open neighborhood of $x$ for each $x \in X$), there is a closed discrete subset $D$ of $X$ such that $\{N(x):x \in D\}$ is a cover of $X$. A topological space $X$ is *Menger* if for each sequence $\{\mathcal{U}_n:n\in\omega\}$ of open covers of $X$, there is a sequence $\{\mathcal{V}_n:n\in\omega\}$ such that for each $n\in\omega$, $\mathcal{V}_n$ is a finite subset of $\mathcal{U}_n$ and $\{\bigcup\mathcal{V}_n:n\in\omega\}$ is an open cover of $X$. Todd Eisworth states: "There are certainly some mathematical questions that arouse the curiosity of almost anyone who comes in contact with them, questions that tempt with the simplicity of their formulation, tantalize with promises of an elegant solution if only one can look at the problem in just the right way, and taunt with the number of excellent mathematicians who have examined the question in the past and failed to solve it. The theory of $D$-spaces is replete with such questions."

I would like to discuss with you some problems related to $D$-spaces, Menger spaces, and the star versions of the Menger property.

### October 5, 2017

#### Speaker: Masoud Ataei (York U)

#### Artificial Intelligence and Analysis of Non-stationary Spatial-temporal Data

In this talk, I will first give a brief introduction to the area of Artificial Intelligence (AI), discussing the main differences between Strong AI and Weak AI and some of the mathematical and philosophical theories formulated to describe each of these subareas. I will then review advances in Machine Learning research and the current state of the art in Deep Learning, Statistical Learning, and Tensor Decomposition-based Learning methods. The rest of the talk will focus on the main challenges confronted when analyzing non-stationary spatial-temporal data using machine learning approaches. As a real-world example of such complex data, I will demonstrate electroencephalogram recordings of patients with major depressive disorder and share some of my recent research results on them. Our proposed framework for analyzing these data draws on techniques developed in statistics, operations research, machine learning, and big data.

### September 28, 2017

#### Speaker: Nathan Gold (York U)

#### Change-point detection for noisy non-stationary biological signals

Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, non-stationary time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We proposed a novel, real-time change point detection method for effectively extracting important time points in non-stationary, noisy time series. We validated our approach with three simulated time series, as well as with a physiological data set of simulated labour experiments in fetal sheep. Our methodology allows, for the first time, the detection of fetal acidemia from changes in the fetus's heart rate variability, rather than through traditional invasive methods. We believe that our method demonstrates a first step towards the development of effective, non-invasive, real-time monitoring during labour from signals which may be easily collected.
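The abstract does not specify the speaker's detection method, but the general problem of flagging a change in a noisy stream online can be sketched with a classical baseline, the one-sided CUSUM detector. The threshold, drift allowance, and the simulated signal below are all illustrative assumptions, not values from the talk.

```python
import numpy as np


def cusum_alarm(signal, baseline_mean, threshold=8.0, drift=1.0):
    """One-sided CUSUM: accumulate evidence of an upward mean shift and
    return the index of the first alarm (statistic > threshold), or None.
    `drift` is the allowance k, typically half the shift one wants to detect."""
    s = 0.0
    for i, z in enumerate(signal):
        s = max(0.0, s + (z - baseline_mean - drift))
        if s > threshold:
            return i
    return None


# Illustrative run: a stationary baseline whose mean shifts at index 200.
rng = np.random.default_rng(1)
pre = rng.normal(0.0, 1.0, 200)   # baseline segment
post = rng.normal(2.0, 1.0, 100)  # mean shifts upward by 2 at index 200
alarm = cusum_alarm(np.concatenate([pre, post]), baseline_mean=0.0)
```

With these settings the detector stays silent on the baseline segment and raises an alarm within a few samples of the true change, illustrating the real-time detection setting the abstract describes; handling the non-stationarity emphasized in the talk requires more than this fixed-baseline sketch.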

### September 21, 2017

#### Kick-off Problem Session

This week we are having audience members introduce a problem they are interested in studying. Each person will take only 5 minutes to give the problem statement and some of the tools used. This session will act as a preview for what may appear in the upcoming weeks. Afterward, we will try to fill the remaining time slots with volunteer speakers.