Monday, October 31, 2011

Title: Dopamine gating of glutamatergic sensorimotor and incentive motivational input signals to the striatum
Author: Jon C. Horvitz
Journal: Behavioural Brain Research
Year: 2002



  • Overview
    • Traditional assumption: REWARD STIMULI are unique in producing a PHASIC dopamine (DA) response; other stimuli produce much more gradual (necessarily TONIC?) DA responses.
    • Not assumed in this article
    • Forebrain DA activity signals neither the "unpleasantness nor the pleasantness of the event" that elicited it.
    • KEY IDEA: "Activation of substantia nigra (SN) and ventral tegmental area (VTA) DA neurons, and consequent release of nigrostriatal and mesolimbic DA to dorsal and ventral striatal target regions, modulates the processing of concurrent glutamate (Glu) inputs. [This] occurs under conditions of unexpected environmental change."
    • Main question: How does DA transmission in the dorsal and ventral striatum influence behavior?
      • What biological/environmental conditions lead to elevated DA activity?
      • What information is carried by Glu inputs to striatal neurons?
      • How are these Glu inputs modulated by DA activity?
    • What is the striatum?
      • Nucleus accumbens
      • Caudate-putamen
      • Olfactory tubercle
    • Here the ventral striatum means nucleus accumbens.
    • Instead of thinking of DA as a 'chemical code' for reward, here it is argued that "environmentally elicited elevations in mesolimbic and nigrostriatal DA activity gate the input of reward signals to the striatum, just as they do for sensorimotor signals [...]"
      • Such reward signals are likely to originate in orbitofrontal cortex and basolateral amygdala.
    • What this means is that DA 'informs' striatal cells that an unexpected, important event has occurred.
  • DA neurons respond to salient unexpected events
    • It is unlikely that heightened attentional states are associated with a phasic DA response, because attentional systems would likely be recruited by both the presence of an unexpected event and the absence of an expected event. DA neurons increase activity only to the former, while they are inhibited by the latter.
    • Time Course
      • Very rapid onset
        • Phasic activation increases approximately 50-100 ms after stimulus onset
    • It can't only be REWARD prediction error because of evidence showing that DA in prefrontal cortex (PFC) and nucleus accumbens (NA) is elevated under appetitive and aversive conditions.
    • Primary Aim: "[P]rovide a framework that accounts both for the promiscuous DA response to salient events and for the large body of evidence showing that DA disruptions attenuate the impact of rewards (and punishers) on several aspects of behavior and learning."
    • Important: DA may be considered a gatekeeper of glutamatergic information flow to the striatum.
  • DA selectively promotes the processing of strong glutamate inputs to the striatum
    • It turns out that DA acting at the D1 receptor increases the activity of glutamate at NMDA receptor sites. D2, however, does the opposite and reduces the activity of glutamate at non-NMDA receptor sites.
  • DA activity gates the throughput of sensorimotor and incentive motivational inputs to the striatum
    • Striatal neurons receive inputs from a wide variety of 'types' of neurons. For example, the caudate, putamen, and ventral striatum contain neurons that respond to arm movements but only when the animal expects to receive a food reward following the movement. Conversely, very little activity is seen if a sound is expected to follow the movement.
    • Striatal neurons respond to very abstract types of information.
    • Ah, logic is so beautiful: [Individual striatal neurons] ... appear to be capable of representing the conjunction of two conditions[!]
      • Striatal neurons are the (and CONDITIONS ...) Lisp function!
    • What happens when nigrostriatal DA transmission is reduced?
      • Not surprisingly, input signals are less likely to produce a response.
      • Interestingly, when DA is depleted, the striatal cell response is diminished in background activity but NOT in phasic activity. When DA is increased, the phasic response is stronger than baseline and the background activity is actually reduced!
    • There's evidence suggesting that DA modulates both the striatal response to current glutamate inputs and long-term changes in the synaptic strength of these inputs.
  • Striatal plasticity: stimulus-response learning, salience assignment to synaptic inputs, and/or stimulus-response-outcome chunking
    • Schultz thinks that DA signals provide the 'difference' between reward occurrence and reward prediction. "The phasic DA response increases the synaptic strength between currently active striatal input and output elements, increasing the future likelihood that the current set of corticostriatal inputs (reward) will activate striatal outputs (motor responses)." The problem with this hypothesis is that if DA additionally responds to aversive events, then modulation of behavior via this mechanism would mean that behavioral responses leading to aversive outcomes would also increase in frequency. So, says JCH, DA's primary function is unlikely to be one involving the strengthening of stimulus-response (S-R) connections.
    • So what does DA actually do then?
      • Well, maybe strengthening of corticostriatal inputs via DA serves to increase the future salience of an event (or series of events).
      • According to JCH this implies that additional processing of information is performed later in more specialized regions of basal ganglia. This would then imply that something downstream of the striatum is what is determining the reward value of a stimulus; it isn't indicated by the phasic DA response. He proposes frontal regions as the area that performs response selection.
      • It's an indicator for salient environmental change
        • Biologically this means that, due to the processes mentioned earlier, this change will cause an increase in "glutamate signal-to-noise ratios within dorsal and ventral striatal target sites" (a toy illustration of this gating idea appears after these notes)
  • Thoughts and Questions
    • What does it mean when 'a representation converges' onto a neuron?
      • How do non-physical things (representations) converge onto physical things (neurons)?
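
Below is a toy numerical illustration (my own addition, not from the paper) of the gating idea: treat a handful of glutamate inputs as plain numbers and let a DA "level" multiply the strong inputs up (D1/NMDA-like) and the weak inputs down (D2/non-NMDA-like). The threshold and gain values are arbitrary assumptions chosen only to show how the signal-to-noise ratio grows with DA.

import numpy as np

inputs = np.array([0.2, 0.3, 2.0, 0.25, 1.8])  # mostly weak, two strong inputs
strong = inputs > 1.0                          # arbitrary "strong input" threshold

def gate(x, da_level):
    """Scale strong inputs up and weak inputs down as the DA level rises."""
    gain = np.where(strong, 1.0 + da_level, 1.0 / (1.0 + da_level))
    return x * gain

for da in (0.0, 1.0, 2.0):
    y = gate(inputs, da)
    snr = y[strong].mean() / y[~strong].mean()
    print('DA level {0}: signal-to-noise = {1:.1f}'.format(da, snr))

With DA at zero, all inputs pass through unchanged; as DA rises, the strong inputs increasingly dominate whatever the striatal cell 'sees'.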

Friday, October 28, 2011

Testing out Google's code prettifier...
% a MATLAB function
function ex = some_func()
% SOME_FUNC  generate one second of a noisy sine wave sampled at 1 kHz
fs = 1000;                      % sampling rate (Hz)
d = 1;                          % duration (s)
t = 0:1 / fs:d;                 % time vector
x = sin(t);                     % clean signal
ex = 0.1 * randn(size(x)) + x;  % noisy copy returned to the caller
end
class Point(object):
    """A simple 2D point whose x coordinate is exposed as a managed property.
    """

    def __init__(self, x, y):
        """__init__

        Arguments:
        - `x`: the x coordinate
        - `y`: the y coordinate
        """
        self._x = x
        self._y = y

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        self._x = value

    @x.deleter
    def x(self):
        del self._x

Thursday, October 27, 2011

The Relationship Between Moments and Cumulants

The latest assignment in my biomedical signal processing class, followed by my modest attempt at the proof:

Show that
\begin{eqnarray*}
  c_{1} &=& m_{1}=\mu \\
  c_{2} &=& m_{2} - m_{1}^{2}=\sigma^{2}.
\end{eqnarray*}
That is, show that the first and second cumulants of a distribution are equal to the mean ($\mu$) and variance ($\sigma^{2}$), respectively.

Proof:
The $n$th moment of a continuous distribution, $m_{n}$, is defined as:
\begin{eqnarray*}
  m_{n} &=& E[x^{n}]\\
  &=& \int_{-\infty}^{\infty}x^{n}p\left(x\right)\,dx.
\end{eqnarray*}
A function that generates any moment is called the moment-generating function, defined as $$E[e^{tX}],$$ where $X$ is a random variable and $E[f\left(X\right)]$ is the expected value of a function, $f$, of the random variable $X$.

If we expand the definition of $E[e^{tX}]$ we can see that
\begin{eqnarray*}
  E[e^{tX}] &=&
  \int_{-\infty}^{\infty}p\left(x\right)\left(\sum_{n=0}^{\infty}\frac{t^{n}x^{n}}{n!}\right)\,\mathrm{d}x\\
  &=& \int_{-\infty}^{\infty}p\left(x\right)\left(1 + tx + \frac{t^{2}x^{2}}{2!} + \cdots +
    \frac{t^{n}x^{n}}{n!} + \cdots\right)\,\mathrm{d}x\\
  &=& 1 + tm_{1} + \frac{t^{2}m_{2}}{2!} + \cdots + \frac{t^{n}m_{n}}{n!} + \cdots.
\end{eqnarray*}

If we evaluate the $n$th derivative of $E[e^{tX}]$ with respect to $t$ at $t=0$, that is, $$\left(\frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}E[e^{tX}]\right)_{t=0} =
\left[\frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}\left(m_{0} + tm_{1} +
  \frac{t^{2}m_{2}}{2!} + \cdots + \frac{t^{n}m_{n}}{n!} + \cdots\right)\right]_{t=0},$$ we get the $n$th moment of the probability distribution: every lower-order term is differentiated away, and every higher-order term retains a factor of $t$ that vanishes at $t=0$, leaving only $m_{n}$.
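
As a quick sanity check of this machinery (my own addition, not part of the assignment), we can let a computer algebra system do the differentiation. Here I assume a normal distribution, whose moment-generating function has the closed form $e^{\mu t + \sigma^{2}t^{2}/2}$:

import sympy as sp

t, mu, sigma = sp.symbols('t mu sigma', positive=True)
mgf = sp.exp(mu * t + sigma**2 * t**2 / 2)  # MGF of a normal distribution

m1 = sp.diff(mgf, t, 1).subs(t, 0)  # first moment
m2 = sp.diff(mgf, t, 2).subs(t, 0)  # second moment

print(sp.simplify(m1))  # mu
print(sp.simplify(m2))  # mu**2 + sigma**2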

The cumulants of a continuous distribution are defined through the cumulant-generating function
$$\ln\left(E[e^{tX}]\right);$$
the $n$th cumulant, $c_{n}$, is its $n$th derivative with respect to $t$ evaluated at $t=0$.
Since the power series definition of the natural logarithm of $x$ is
$$\ln\left(x\right) = \sum_{n=1}^{\infty}\frac{\left(-1\right)^{n+1}}{n}\left(x - 1\right)^{n},$$
we can rewrite the definition of the $n$th cumulant, $c_{n}$, as
\begin{eqnarray*}
  \ln\left(E[e^{tX}]\right) &=&
  \sum_{n=1}^{\infty}\frac{\left(-1\right)^{n+1}}{n}\left(tm_{1} +
    \frac{t^{2}m_{2}}{2!} + \cdots\right)^{n}.
\end{eqnarray*}
Expanding the terms of the previous equation, we see that
\begin{eqnarray*}
  \ln\left(E[e^{tX}]\right) &=& \left(tm_{1} +
    \frac{t^{2}m_{2}}{2!} + \cdots\right) - \frac{1}{2}\left(tm_{1} +
    \frac{t^{2}m_{2}}{2!} + \cdots\right)^{2} + \cdots\\
&=&\left(tm_{1} +
    \frac{t^{2}m_{2}}{2!} + \cdots\right)-\frac{1}{2}\left(t^{2}m_{1}^{2}+t^{3}m_{1}m_{2}+\cdots\right) + \cdots.
\end{eqnarray*}
Distributing the $-\frac{1}{2}$ over the second term of $\ln\left(E[e^{tX}]\right)$ gives
$$\left(tm_{1}+\frac{t^{2}m_{2}}{2!}+\cdots\right)-\frac{t^{2}m_{1}^{2}}{2}-\frac{t^{3}m_{1}m_{2}}{2}+\cdots.$$

The first derivative with respect to $t$, evaluated at $t=0$, eliminates every term that still carries a factor of $t$ and leaves $c_{1}=m_{1}=\mu$. The second derivative evaluated at $t=0$ picks up contributions from both $t^{2}$ terms:

$$\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\left(\frac{t^{2}m_{2}}{2!}-\frac{t^{2}m_{1}^{2}}{2}\right) = m_{2}-m_{1}^{2},$$
which is exactly the variance, so $c_{2}=m_{2}-m_{1}^{2}=\sigma^{2}$.

Thus it is shown that the first two cumulants of a distribution are the mean and variance, respectively.
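
A numerical version of the same check (again my own addition): draw samples from any distribution, estimate the first two moments, and confirm that $m_{1}$ matches the sample mean and $m_{2}-m_{1}^{2}$ matches the sample variance. The exponential distribution and sample size below are arbitrary choices.

import numpy as np

x = np.random.exponential(scale=2.0, size=100000)  # arbitrary test distribution

m1 = np.mean(x)     # first moment
m2 = np.mean(x**2)  # second moment

c1 = m1             # first cumulant
c2 = m2 - m1**2     # second cumulant

print('c1 = {0:.3f} (theoretical mean = 2.0)'.format(c1))
print('c2 = {0:.3f} (theoretical variance = 4.0)'.format(c2))
print('np.var(x) = {0:.3f}'.format(np.var(x)))  # identical to c2 for ddof=0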

Monday, October 17, 2011

Quicksort

I finally understand the quicksort algorithm. Python helped me with this. Here's how it works:

import random

def quicksort(lst):
    """Sort lst with quicksort. Note that lst is modified by the pop below."""
    n = len(lst)

    # base case: a list with 1 or no elements is already sorted
    if n < 2:
        return lst

    # remove a random element and make it the pivot value
    pivot = lst.pop(random.randint(0, n - 1))

    # elements less than pivot
    less = filter(lambda x: x < pivot, lst)

    # elements not less than pivot
    greater = filter(lambda x: not x < pivot, lst)

    # uncomment to watch the recursion unfold
    # print 'pivot: {0}, less: {1}, greater: {2}'.format(pivot, less, greater)

    # recursively sort the sublists and join them around the pivot
    return quicksort(less) + [pivot] + quicksort(greater)

if __name__ == '__main__':
    lst = range(1, 11)
    random.shuffle(lst)
    print 'unsorted: {lst}'.format(lst=lst)
    print 'sorted: {lst}'.format(lst=quicksort(lst))
This is the algorithm implemented in Python using recursion. With recursion it is important to identify a base case to avoid infinite recursion and stack overflows. In this case our base case is a list with 1 or fewer elements, because a one-element list and the empty list are both already sorted.
Now, if you uncomment the print statement and run the code you can see how it works.
Step 1:
Pick a random element from the input list. Let lst = [9, 4, 2, 8, 7, 6, 5, 3, 1, 10], let pivot = 5.
Step 2:
Partition all of the elements less than pivot into a list less. Do the same for the elements that are greater than pivot and call it greater.
Example:
less = [4, 2, 3, 1], greater = [9, 8, 7, 6, 10]
This results in the function returning quicksort([4, 2, 3, 1]) + [5] + quicksort([9, 8, 7, 6, 10])
So now we have [..., 5, ...]
Step 3:
Repeat steps 1 and 2 until you reach the base case, that is, until there are 0 or 1 elements in less and greater.
Continuing the previous example:
The next iteration of the recursive call results in quicksort([]) + [1] + quicksort([4, 2, 3]).
Now we have [1, ..., 5, ... ]
Next iteration gives quicksort([]) + [2] + quicksort([4, 3])
The updated list is: [1, 2, ..., 5, ...]
Next iteration: quicksort([]) + [3] + quicksort([4])
Updated list: [1, 2, 3, 4, 5, ...]
Great! We've sorted half the list. Now going to the greater half...
Next iteration: quicksort([8, 7, 6]) + [9] + quicksort([10])
Updated list: [1, 2, 3, 4, 5, ..., 9, 10]
Next iteration: quicksort([]) + [6] + quicksort([8, 7])
Updated list: [1, 2, 3, 4, 5, 6, ..., 9, 10]
Next iteration: quicksort([]) + [7] + quicksort([8])
Updated list: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Whoo hoo! We've sorted this (rather boring) list! I hope this has been informative.

Sunday, October 2, 2011

TMS Perturbs Saccade Trajectories and Unmasks an Internal Feedback Controller for Saccades



Author(s): Minnan Xu-Wilson, Jing Tian, Reza Shadmehr, David S. Zee
Summary: Transcranial magnetic stimulation (TMS) centrally perturbs saccades, and these saccades are corrected for the perturbation within the movement, providing further support for internal feedback control of saccadic eye movements.
Question(s): Can we observe online correction of saccadic eye movements after a central perturbation via TMS?
Abbreviations: Postinhibitory Rebound (PIR)

  • Introduction
    • Saccades are known to be highly variable beasts.
      • However, they generally arrive on target (±1.0 deg)
    • Motor commands may depend on any or all of the following (among other factors):
      • Stimulus content
      • Stimulus predictability
      • Whether or not the stimulus covaries with the target of a reaching movement
    • Motivation
      • While arm movements can be perturbed via a force-field, it has been difficult to perform an analogous procedure on eye movements.
        • Current Methods (and their drawbacks)
          • Facial Stimulation
            • Painful
          • Loud Noise
            • Uncomfortable
            • Shows habituation
      • Let's use TMS!
    • Preliminary findings
      • "When a single pulse is applied immediately before or during a saccade, it engages a startle-like neural reflex that briefly alters the ongoing oculomotor commands, slowing or even transiently stopping the eye movement."
      • "Despite this perturbation, the movement is corrected with commands that arrive later in the same saccade, accurately steering the eyes close to the target even when the target stimulus is no longer visible."
  • Materials and Methods
    • n = 5, 3 M
    • Visually guided saccades
    • Materials
      • Saccades
        • Bite-bar (annoying)
        • Scleral search coil (probably painful)
        • fs = 1000 Hz
        • Filters
          • Low-pass 90 Hz Butterworth on eye position signals
          • 3rd order Savitzky-Golay applied to position signals to derive velocity and acceleration signals (a rough sketch of this preprocessing appears after the Results notes below)
        • 0.2 deg red laser beam projected 1 meter away from subjects
        • Used 500 Hz frame rate video and an EyeLink 1000 to corroborate findings from search coils
      • TMS
        • 2.2 Tesla
        • Stimulation strength: 50% - 60%
        • Stimulated Cz (using EEG 10-20 system coordinates)
    • Methods
      • 16 deg/s criterion for saccades
      • Exclusion criteria
        1. Amp < 67% of target displacement
        2. Saccade reaction time < 100 ms or > 500 ms
        3. Abnormal trajectories due to blinks
      • Pause criteria
        • 2 clear peaks in velocity profile
        • Local minima < 50 deg/s
    • Experiment 1: Saccade Onset
      • Fixate for 1500-2300 ms
      • TMS triggered when 30 deg/s velocity threshold reached
        • happened only on 67% of trials (determined probabilistically)
        • Confound? Only three out of five subjects were tested to determine whether the sound alone elicited a perturbation. Why not all five, when the introduction states that loud sounds are a commonly used method to achieve the same end goal as their experiment?
    • Experiment 2: During the Saccade
      • Determine how timing of TMS affects saccade trajectories
      • Used oblique saccades because these would provide a larger time window for TMS to take effect.
      • Given on 70% of trials
      • at 5, 15, 25, 35, 45 and 55 ms after saccade onset (i.e., velocity threshold reached).
    • Experiment 3: Before the saccade
      • Assumed 180 ms saccade latency
      • Triggered TMS w.r.t. target onset randomly at 40, 60, or 80 ms before the expected saccade onset.
      • Analysis:
        • Grouped trials into bins according to the actual time of TMS before saccade onset.
  • Modeling
    • Ramat et al., 2005
    • Components
      • "Two coupled excitatory burst neuron (EBN) and inhibitory burst neuron (IBN) pairs..."
      • "Burst neurons fired at a rate that depends on the size of the difference between the current estimate of eye position and the target position..."
      • "Motor error calculated by integrating the velocity output from the burst neurons and then subtracting this estimate of current eye position from the desired goal of the movement--the integration served as a state estimator, providing an ongoing internal feedback to the system..."
      • "The burst neurons' membranes were modeled as high-pass filters with adaptation"
  • Results
    • Experiment 1
      • In 74% of trials the TMS perturbed the saccade trajectory regardless of where the brain was stimulated
      • Always in the form of a pause in the velocity profile
      • Effect persisted for 32 ms before the eyes reaccelerated
    • No difference in slowing time of vertical and horizontal components of oblique saccades.
    • No habituation over sets of trials (a friend pointed out to me that this is like asking whether there was habituation of curare, but the authors reported the statistics used to test this effect so it seems like it's relevant somehow...)
    • Paused Saccades
      • The size of the compensatory movement was highly correlated with the remaining distance to the target.
      • Visual condition (whether the target was blanked on saccade) made NO difference in the time it took for the resumed movement to start, nor in the quality of compensation for error during the pause
    • Interestingly, the final amplitude of a perturbed saccade was, on average, larger than that of a control saccade by 0.83 deg
    • Oddities
      • No-pause saccades (26%) elicited the following 2 properties:
        • Their amplitudes were generally smaller than control saccades by 0.84 degrees
        • Amplitudes were generally 1.10 deg larger than the initial amplitudes of saccades that paused and resumed
    • Anti-saccades were tested because the effect may have been due to the involuntary nature of so-called "pro" saccades
    • During horizontal saccades no effect of eyelid perturbation was found on eye trajectory, thus the perturbation in the eye was caused by TMS not some interaction between the TMS and the eyelid.
    • Lid saccade pauses and eye saccade pauses are highly correlated
    • No evidence for head motion causing the observed effect
    • Experiment 2
      • TMS applied late in the time course of a saccade could stop the saccade altogether
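
A rough sketch (my own, not the authors' code) of the preprocessing pipeline described in the Materials section: low-pass the position trace with a 90 Hz Butterworth filter, estimate velocity with a 3rd-order Savitzky-Golay filter, and mark saccade samples with the 16 deg/s velocity criterion. The synthetic position trace, the Butterworth order, and the Savitzky-Golay window length are assumptions made only for illustration.

import numpy as np
from scipy.signal import butter, filtfilt, savgol_filter

fs = 1000.0  # sampling rate in Hz, as reported
t = np.arange(0.0, 1.0, 1.0 / fs)

# synthetic 10-deg "saccade": a smooth step plus a little measurement noise
pos = 10.0 / (1.0 + np.exp(-(t - 0.5) * 200.0)) + 0.05 * np.random.randn(t.size)

# 90 Hz low-pass Butterworth on the position signal (filter order assumed)
b, a = butter(4, 90.0 / (fs / 2.0), btype='low')
pos_f = filtfilt(b, a, pos)

# 3rd-order Savitzky-Golay differentiation to get velocity in deg/s
# (window length assumed)
vel = savgol_filter(pos_f, window_length=21, polyorder=3, deriv=1, delta=1.0 / fs)

# 16 deg/s velocity criterion for marking saccade samples
is_saccade = np.abs(vel) > 16.0
onset = t[np.argmax(is_saccade)] if is_saccade.any() else None
print('saccade onset at t = {0} s'.format(onset))

Acceleration would come from the same savgol_filter call with deriv=2.
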
Why are so few subjects used in this line of research? There are only 5 subjects in the study and most saccade studies do not have any more than that. In an experiment that a colleague of mine and I are working on we have 5-6 subjects per condition and that still seems low to me.
Interestingly, these guys found that saccades that were perturbed were slower yet hypermetric when compared with non-TMS'ed saccades. They account for this by suggesting that, during the pause period, the burst neurons are inhibited by reactivated OPNs, leading to a greater firing rate of the burst neurons after the pause period. This effect was not greater than the firing rates seen after saccades of the same size were made after a normal fixation. I don't really understand how the "inherent delay in the feedback loop does not allow for complete compensation of this overshoot" can account for the paradox of hypermetric saccades with lower peak velocities than saccades not perturbed by TMS.
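
To convince myself that an internal feedback controller really does produce the pause-and-resume behavior, here is a toy discrete-time simulation (my own, not the Ramat et al. model): the motor error is the target minus an internal estimate obtained by integrating the velocity command, the "burst" command is proportional to that error, and the command is silenced for 30 ms mid-movement to stand in for the TMS pause. The gain, durations, and perfect state estimator are all simplifying assumptions.

import numpy as np

dt = 0.001     # 1 ms time steps
target = 10.0  # desired saccade amplitude (deg)
gain = 60.0    # burst "gain" in 1/s (arbitrary assumption)

eye = 0.0       # actual eye position (deg)
estimate = 0.0  # internal estimate from integrating an efference copy

for step in range(120):  # 120 ms of simulated time
    error = target - estimate  # motor error from internal feedback
    command = gain * error     # burst-neuron-like velocity command (deg/s)

    # crude stand-in for the TMS-evoked pause: silence the command for 30 ms
    if 30 <= step < 60:
        command = 0.0

    eye += command * dt       # the eye integrates the velocity command
    estimate += command * dt  # the state estimator integrates a copy of it

print('final eye position: {0:.2f} deg'.format(eye))  # close to 10 despite the pause

Because the error signal is still nonzero when the pause ends, the command resumes and the movement lands near the target, which is essentially the paper's argument for online internal feedback.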

Phillip: Why is habituation not important?
Nate: It's possible that the authors are conflating the notion of stimulating the nervous system with TMS with that of a stimulus evoking a response in the nervous system. Hmmm, let's talk about this tomorrow.

Saturday, October 1, 2011

Three Programming Languages Compared

Let's take a look at three different programming languages--MATLAB, Python, and LUSH--and compare the ease of plotting with each as measured by the number of commands required to generate a simple sine wave on the interval [0, 2π] and save it to some relatively portable image format.



MATLAB:

t = 0:0.001:(2 * pi);
x = sin(t);
plot(t, x);
axis tight;
xlabel('$t$', 'interpreter', 'latex');
ylabel('$\sin(t)$', 'interpreter', 'latex');
title('$\sin(t)$', 'interpreter', 'latex');
legend({'$\sin(t)$'}, 'interpreter', 'latex');
set(gcf, 'PaperOrientation', 'landscape', 'PaperPosition', [0 0 11.5 8], ...
    'PaperSize', [11.5 8]);
saveas(gcf, 'temp', 'png');



Python using the IPython Shell:

t = arange(0, 2 * pi, 0.001)
x = sin(t)
plot(t, x)
xlabel('$t$')
ylabel('$\sin(t)$')
title('$\sin(t)$')
legend(['$\sin(t)$'])
savefig('temp', format='png')


LUSH:
(libload "libplot/plotter")
(defvar p (new Plotter))
(let ((w (ps-window "/home/cpcloud/Desktop/temp.ps")))
  (==> p PlotFunc "sin(t)" sin 0 +2pi+ 0.001 (alloccolor 0 0 1))
  (==> p SetXScale 0 +2pi+)
  (==> p SetYScale (- 1) 1)
  (==> p SetXLabel "t")
  (==> p SetYLabel "sin(t)")
  (==> p SetLegend "sin(t)")
  (==> p SetTitle "sin(t)")
  (==> p Redisplay))



;; not strictly necessary--it outputs a PS file without this which is perfectly fine
(sh "convert -density 150 -geometry 100% /home/cpcloud/Desktop/temp.ps temp.png")


Clearly the winner here using the metric mentioned above is Python using the IPython shell. MATLAB and LUSH are tied for second; however, you might change your mind after seeing their respective plots.

Sadly, the plot created by LUSH leaves much to be desired and sends me running back to matplotlib and (GASP!) even MATLAB. LUSH's plotter isn't buggy per se, it's just not as refined as Python's or MATLAB's.

For some reason though, I still like programming in LUSH much more than programming in MATLAB. Lush is WAAAAY faster during matrix computations and the anonymous function syntax is much more forgiving, e.g., (lambda (x) (somefunction x)) versus @(x) somefunction(x). The ability to write C/C++ code mixed with LUSH code certainly doesn't hurt its coolness factor either.

I have to admit, though, when I took Yann LeCun's machine learning course at NYU I thought Lush was a crazy little language and all those parentheses were going to drive me insane!! Little did I know I would become slightly obsessed with it just under a year later...

Check out LUSH, it has a lot of potential!

* Next up: the LUSH code to make its plotting facility much more user-friendly. Stay tuned.

Dopamine and Learned Food Preferences


Authors: Anthony Sclafani, Khalid Touzani, and Richard J. Bodnar.
Summary: A review of the relationship between dopamine and learned flavor preferences in light of the current obesity epidemic.
Question: What role does dopamine (DA) play in conditioned flavor preferences?

Past questions:
  • Introduction
    • Study 1
      • Question(s):
        • Does the sweet taste of a noncaloric saccharin solution stimulate DA release?
        • Is this response altered in animals that developed a conditioned aversion to saccharin?
      • Result(s):
        • Intraoral infusions of saccharin stimulated nucleus accumbens (NAc) DA in naive rats, but the same saccharin infusions significantly reduced DA release in rats previously conditioned to avoid the sweet solution by pairing it with lithium chloride (LiCl) injections.
      • Conclusion(s):
        • The NAc DA response was related to the positive reward quality of the saccharin solution, not to its sweet taste per se or, alternatively, to its arousing properties.
    • Study 2
      • Question(s):
        • Does a conditioned taste preference increase NAc DA release, i.e., does it have the opposite effect of a conditioned taste aversion?
      • Method(s):
        • Two Groups
          • 20 hours/day training sessions
          • Experimental Pairings
            • Received IG Polycose infusions along with a bitter solution
            • Received H2O infusions along with a sour solution (citric acid)
          • Control
            • Received bitter and sour solutions sans IG infusions
      • Result(s):
      • Conclusion(s):
  • Systemic Studies
  • Central Studies

    • Caption for figure 1 here...

The Tar Files

Some construction workers decided it would be a good idea to boil up some tar over an open flame, right next to the propane tank fueling said flame.

In other news, I've decided it would be a good idea to learn GNU Emacs.