<mediaitem title="Analysis of randomized algorithms via the probabilistic method">

<abstract>


<p>

The probabilistic method is an extremely powerful tool in combinatorics that can be

used to prove many surprising results. The idea is the following: to prove that an

object with a certain property exists, we define a distribution of possible objects

and show that, among objects in the distribution, the property holds with

non-zero probability. The key is that by using the tools and techniques of

probability theory, we can vastly simplify proofs that would otherwise require very

complicated combinatorial arguments.

</p>
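To make this concrete, here is a small sketch (an illustration of the classic first-moment calculation, not code from the talk): if the expected number of "bad" substructures in a random object is less than one, then some object with no bad substructures must exist.

```python
from math import comb

def expected_mono_cliques(n: int, k: int) -> float:
    """Expected number of monochromatic k-cliques when each edge of the
    complete graph K_n is coloured red or blue independently with
    probability 1/2."""
    # A fixed k-subset is monochromatic with probability 2 * 2^(-C(k,2)),
    # and there are C(n, k) such subsets; linearity of expectation does the rest.
    return comb(n, k) * 2 ** (1 - comb(k, 2))

# The expectation is below 1, so some 2-colouring of K_1000 has no
# monochromatic 20-clique -- existence proved without constructing anything.
print(expected_mono_cliques(1000, 20) < 1)   # → True
```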

<p>

As a technique, the probabilistic method developed rapidly during the latter half of

the 20th century due to the efforts of mathematicians like Paul Erdős and increasing

interest in the role of randomness in theoretical computer science. In essence, the

probabilistic method allows us to determine how good a randomized algorithm's output

is likely to be. Possible applications range from graph property testing to

computational geometry, circuit complexity theory, game theory, and even statistical

physics.

</p>

<p>

In this talk, we will give a few examples that illustrate the basic method and show

how it can be used to prove the existence of objects with desirable combinatorial

properties as well as produce them in expected polynomial time via randomized

algorithms. Our main goal will be to present a very slick proof from 1995 due to

Spencer on the performance of a randomized greedy algorithm for a set-packing

problem. Spencer, for seemingly no reason, introduces a time variable into his

greedy algorithm and treats set-packing as a Poisson process. Then, like magic,

he is able to show that his greedy algorithm is very likely to produce a good

result using basic properties of expected value.

</p>
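A minimal sketch of a randomized greedy set-packing heuristic (a simplification for illustration: sets are scanned in a uniformly random order, rather than being assigned random arrival times from a Poisson process as in Spencer's analysis):

```python
import random
from itertools import combinations

def randomized_greedy_packing(sets, seed=None):
    """Scan the sets in a uniformly random order, keeping each one that
    is disjoint from everything kept so far.  (Spencer instead assigns
    each set a random arrival time, viewing arrivals as a Poisson
    process; scanning in random order is a simplification of that.)"""
    rng = random.Random(seed)
    order = list(sets)
    rng.shuffle(order)
    packing, covered = [], set()
    for s in order:
        if covered.isdisjoint(s):
            packing.append(s)
            covered.update(s)
    return packing

# All 3-element subsets of a 9-point ground set: a perfect packing
# uses 3 pairwise-disjoint triples, and greedy always reaches one here
# because the final packing is maximal.
triples = [frozenset(c) for c in combinations(range(9), 3)]
print(len(randomized_greedy_packing(triples, seed=1)))   # → 3
```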

<p>

Properties of Poisson and Binomial distributions will be applied, but I'll remind

everyone of the needed background for the benefit of those who might be a bit rusty.

Stat 230 will be more than enough. Big O notation will be used, but not excessively.

</p>
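For anyone wanting to jog their memory beforehand, the key fact is that Binomial(n, λ/n) converges to Poisson(λ) as n grows; a quick numerical check (illustrative only):

```python
from math import comb, exp, factorial

def binom_pmf(n: int, p: float, k: int) -> float:
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(lam: float, k: int) -> float:
    return exp(-lam) * lam**k / factorial(k)

# P(X = 3) under Binomial(n, 2/n) approaches P(X = 3) under Poisson(2)
# as n grows; the gap shrinks roughly like 1/n.
for n in (10, 100, 1000):
    print(n, abs(binom_pmf(n, 2.0 / n, 3) - poisson_pmf(2.0, 3)))
```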

</abstract>

<presentor>Elyot Grant</presentor>

<thumbnail

<mediafile

<flvfile

</mediaitem>

<!-- not finished encoding

<mediaitem title="Machine learning vs human learning - will scientists become obsolete?">

<abstract><p></p></abstract>

<presentor>Dr. Shai Ben-David</presentor>

<thumbnail

<mediafile

<flvfile

</mediaitem>

-->

<mediaitem title="How to build a brain: From single neurons to cognition">

<abstract>


<p>

Theoretical neuroscience is a new discipline focused on constructing

mathematical models of brain function. It has made significant

headway in understanding aspects of the neural code. However,

past work has largely focused on small numbers of neurons, and

so the underlying representations are often simple. In this

talk I demonstrate how the ideas underlying these simple forms of

representation can underwrite a representational hierarchy that

scales to support sophisticated, structure-sensitive

representations. I will present a general architecture, the semantic

pointer architecture (SPA), which is built on this hierarchy

and allows the manipulation, processing, and learning of structured

representations in neurally realistic models. I demonstrate the

architecture on the Raven's Progressive Matrices (RPM), a test of
general fluid intelligence.

</p>

</abstract>

<presentor>Chris Eliasmith</presentor>

</mediaitem>

<mediaitem title="BareMetal OS">

<abstract>

<p>

BareMetal is a new 64-bit OS for x86-64 based computers. The OS is written entirely in Assembly, while applications can be written in Assembly or C/C++. High Performance Computing is the main target application.

</p>

</abstract>

<presentor>Ian Seyler, Return to Infinity</presentor>

</mediaitem>

<mediaitem title="A Brief Introduction to Video Encoding">

<abstract>

<p>

In this talk, I will go over the concepts used in video encoding (such as motion estimation/compensation, inter- and intra-frame prediction, quantization, and entropy encoding), and then demonstrate these concepts and algorithms in use in the MPEG-2 and H.264 video codecs. In addition, some clever optimization tricks using SIMD/vectorization will be covered, time permitting.

</p>
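As a taste of the first of these concepts, here is a toy full-search motion estimator (an illustration only; real encoders use much faster search patterns and SIMD): it slides a block around a reference frame and keeps the offset with the smallest sum of absolute differences (SAD).

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized 2-D blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(ref, cur_block, top, left, radius):
    """Try every offset within `radius` of (top, left) in the reference
    frame and return (best SAD, dy, dx) for the best-matching block."""
    n = len(cur_block)
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= len(ref) - n and 0 <= x <= len(ref[0]) - n:
                candidate = [row[x:x + n] for row in ref[y:y + n]]
                cost = sad(cur_block, candidate)
                if best is None or cost < best[0]:
                    best = (cost, dy, dx)
    return best

# A block cut from position (2, 3) is recovered by searching around
# (1, 2): the motion vector is (dy, dx) = (1, 1) with a zero-SAD match.
ref = [[10 * r + c for c in range(6)] for r in range(6)]
cur = [row[3:5] for row in ref[2:4]]
print(full_search(ref, cur, top=1, left=2, radius=2))   # → (0, 1, 1)
```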

<p>

With the recent introduction of digital TV and the widespread success
of video sharing websites such as YouTube, it is clear that lossily
compressing video at good quality has become an important task. The
complex algorithms involved also require a great deal of optimization
in order to run fast, another important requirement for any video
codec that aims to be widely adopted.

</p>

</abstract>

</mediaitem>

<mediaitem title="Cooking for Geeks">

<abstract>

<p>

The CSC is happy to be hosting Jeff Potter, author of "Cooking for Geeks", for a presentation on the finer arts of food science. Jeff's book has been featured on NPR and the BBC, and his presentations have wowed audiences of hackers and foodies alike. We're happy to have Jeff joining us for a hands-on demonstration.

</p>

<p>

But you don't have to take our word for it... here's what Jeff has to say:

</p>

<p>

Hi! I'm Jeff Potter, author of Cooking for Geeks (O'Reilly Media, 2010), and I'm doing a "D.I.Y. Book Tour" to talk

about my just-released book. I'll talk about the food science behind what makes things yummy, giving you a quick

primer on how to go into the kitchen and have a fun time turning out a good meal.

Depending upon the space, I’ll also bring along some equipment or food that we can experiment with, and give you a chance to play with stuff and pester me with questions.

</p>

</abstract>

<presentor>Jeff Potter</presentor>

</mediaitem>

<mediaitem title="The Art of the Propagator">

<abstract>

<p>

We develop a programming model built on the idea that the basic computational elements are autonomous machines interconnected by shared cells through which they communicate. Each machine continuously examines the cells it is interested in, and adds information to some based on deductions it can make from information in the others. This model makes it easy to smoothly combine expression-oriented and constraint-based programming; it also easily accommodates implicit incremental distributed search in ordinary programs.

</p>
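A minimal sketch of this model (an illustration, not the authors' code): cells hold information, and each "machine" is a closure that re-runs whenever a cell it watches gains a value, adding whatever it can deduce.

```python
class Cell:
    """A shared cell: holds at most one value and wakes its watching
    machines whenever it learns something new."""
    def __init__(self):
        self.value = None
        self.watchers = []

    def add(self, value):
        if self.value is None:
            self.value = value
            for machine in self.watchers:
                machine()
        elif self.value != value:
            raise ValueError("contradiction between deductions")

def adder(a, b, total):
    """An autonomous machine maintaining a + b = total.  It deduces in
    whichever direction it has enough information for."""
    def run():
        if a.value is not None and b.value is not None:
            total.add(a.value + b.value)
        if total.value is not None and a.value is not None:
            b.add(total.value - a.value)
        if total.value is not None and b.value is not None:
            a.add(total.value - b.value)
    for cell in (a, b, total):
        cell.watchers.append(run)

# The same machine runs "backwards": given the total and one addend,
# it fills in the other.
x, y, z = Cell(), Cell(), Cell()
adder(x, y, z)
z.add(10)
x.add(3)
print(y.value)   # → 7
```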

<p>

This work builds on the original research of Guy Lewis Steele Jr. and was developed more recently with the help of Chris Hanson.

</p>

</abstract>

<presentor>Gerald Jay Sussman</presentor>

</mediaitem>

<mediaitem title="Why Programming is a Good Medium for Expressing Poorly Understood and Sloppily Formulated Ideas">

<abstract>

<p>

I have stolen my title from the title of a paper given by Marvin Minsky in the 1960s, because it most effectively expresses what I will try to convey in this talk.

</p>

<p>

We have been programming universal computers for about 50 years. Programming provides us with new tools to express ourselves. We now have intellectual tools to describe "how to" as well as "what is". This is a profound transformation: it is a revolution in the way we think and in the way we express what we think.

</p>
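A standard illustration of the "what is" versus "how to" distinction (an editor's example, not necessarily the one the talk will use): the square root has a declarative definition, while Heron's method is a procedure for computing it.

```python
def sqrt_newton(x, tolerance=1e-10):
    """'What is': sqrt(x) is the y >= 0 such that y * y == x.  That is a
    definition, not a method.  'How to': start from a guess and repeatedly
    average it with x / guess until it stops changing (Heron's method)."""
    guess = 1.0
    while abs(guess * guess - x) > tolerance * x:
        guess = (guess + x / guess) / 2
    return guess

print(sqrt_newton(2.0))   # ≈ 1.4142135623...
```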

<p>

For example, one often hears a student or teacher complain that the student knows the "theory" of some subject but cannot effectively solve problems. We should not be surprised: the student has no formal way to learn technique. We expect the student to learn to solve problems by an inefficient process: the student watches the teacher solve a few problems, hoping to abstract the general procedures from the teacher's behavior on particular examples. The student is never given any instructions on how to abstract from examples, nor is the student given any language for expressing what has been learned. It is hard to learn what one cannot express. But now we can express it!

</p>

<p>

Expressing methodology in a computer language forces it to be unambiguous and computationally effective. The task of formulating a method as a computer-executable program and debugging that program is a powerful exercise in the learning process. The programmer expresses his/her poorly understood or sloppily formulated idea in a precise way, so that it becomes clear what is poorly understood or sloppily formulated. Also, once formalized procedurally, a mathematical idea becomes a tool that can be used directly to compute results.

</p>

<p>

I will defend this viewpoint with examples and demonstrations from electrical engineering and from classical mechanics.