Notes for 'Probabilistic Robotics', Chapter 2 (Sebastian Thrun)

Jiayin Xie

May 21, 2020 21:05 Engineering

Reading notes for the book 'Probabilistic Robotics' (Sebastian Thrun). This is a summary of my understanding of the book; I hope it helps others. This note covers Chapter 2, which mainly focuses on introducing the basic and key concepts of probability theory that are useful in robotics.


 

Key concepts: 

  1. Probability theory

 

  • Random variable: a function that maps the outcome of a random experiment to a value; it takes no explicit input and yields a random output.
    • Expectation
    • Covariance
    • Entropy
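These three quantities can be computed directly from a discrete distribution; a minimal sketch (the outcome values and probabilities below are made up for illustration):

```python
import numpy as np

# Toy discrete random variable X: illustrative outcomes and probabilities.
values = np.array([0.0, 1.0, 2.0])
probs = np.array([0.2, 0.5, 0.3])

expectation = np.sum(values * probs)                     # E[X]
variance = np.sum((values - expectation) ** 2 * probs)   # Var[X], the 1-D case of covariance
entropy = -np.sum(probs * np.log2(probs))                # Shannon entropy in bits
```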

 

  • Probability function: a function that gives the probability that a random variable takes on a given value.

 

  • Probability density function: in the continuous case, the probability that the variable equals any specific value is zero. Thus, we define a density function; the probability of an interval is obtained by integrating the density over it.
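To make the "single value has probability zero" point concrete, here is a sketch using a standard normal density, whose interval probabilities have a closed form via the error function (the choice of density and interval is mine, not the book's):

```python
import math

# Standard normal density: any single point has probability zero, but
# integrating the density over an interval gives a nonzero probability.
def std_normal_cdf(x):
    # closed-form CDF of N(0, 1) via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

p_point = 0.0                                            # P(X = 1.0) exactly
p_interval = std_normal_cdf(1.0) - std_normal_cdf(-1.0)  # P(-1 <= X <= 1)
```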

 

  • Joint probability: the probability that two events happen jointly.

We use a multivariate density function to describe a joint probability density.

  • Conditional probability: given that event A happens, the probability that event B happens.

Conditioning on a random variable taking a value: P(A | B = x).

 

Independent events: p(A|B) = p(A), i.e., the probability of event A given event B is the same as the probability of event A alone; equivalently, p(A, B) = p(A) p(B).

 

  • Theorem of total probability: given the conditional probabilities, we integrate (or sum) them against the distribution of the conditioning variable to get the total probability of an event: p(x) = Σ_y p(x|y) p(y) in the discrete case, p(x) = ∫ p(x|y) p(y) dy in the continuous case.
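In the discrete case the integral becomes a sum; a minimal numeric sketch (numbers are illustrative):

```python
import numpy as np

# Theorem of total probability, discrete case:
#   p(x) = sum over y of p(x | y) p(y)
p_y = np.array([0.4, 0.6])          # distribution over y
p_x_given_y = np.array([0.9, 0.2])  # p(x | y) for each value of y
p_x = np.sum(p_x_given_y * p_y)     # marginal probability of x
```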

 

  • Bayes rule: given the conditional probability p(y|x), compute the inverse conditional p(x|y). In probabilistic robotics, we are interested in inferring the state x from the observation or sensor data y:

p(x|y) = p(y|x) p(x) / p(y)

Here, p(y|x) is called the generative model: it is the density function for y assuming x, or in other words, it can produce the data given x. p(x) is called the prior distribution, and p(x|y) is called the posterior distribution.
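A small numeric sketch of this inference, in the spirit of the book's door example; the two states, the sensor model, and all numbers are my own illustration:

```python
# Infer the state x from a sensor reading y with Bayes rule:
#   p(x | y) = p(y | x) p(x) / p(y)
p_x = {"open": 0.5, "closed": 0.5}          # prior p(x)
p_y_given_x = {"open": 0.6, "closed": 0.3}  # generative model p(y | x) for the observed y

p_y = sum(p_y_given_x[x] * p_x[x] for x in p_x)  # normalizer via total probability
posterior = {x: p_y_given_x[x] * p_x[x] / p_y for x in p_x}
```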

 

  • Conditional independence: it applies whenever a variable y carries no information about a variable x once another variable z's value is known.

Since p(x,y|z) = p(x|z) p(y|z),

it follows that p(x|z) = p(x|y,z)

and p(y|z) = p(y|x,z).
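The two identities can be checked numerically by constructing a joint distribution that is conditionally independent given z by construction (the tables below are illustrative):

```python
import numpy as np

# Conditional independence: p(x, y | z) = p(x | z) p(y | z). Build a joint
# distribution that satisfies it, then verify the implied identity
# p(x | y, z) = p(x | z).
p_z = np.array([0.5, 0.5])
p_x_given_z = np.array([[0.8, 0.2],    # row = z, column = x
                        [0.3, 0.7]])
p_y_given_z = np.array([[0.6, 0.4],    # row = z, column = y
                        [0.1, 0.9]])

# joint[z, x, y] = p(z) p(x | z) p(y | z)
joint = p_z[:, None, None] * p_x_given_z[:, :, None] * p_y_given_z[:, None, :]

# p(x | y, z) = joint normalized over x; it equals p(x | z) for every y
p_x_given_yz = joint / joint.sum(axis=1, keepdims=True)
independent = np.allclose(p_x_given_yz, p_x_given_z[:, :, None])
```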

 

  2. How to describe the interaction between robots and the environment?
  • We use the state x_t to describe the aspects of the robot and its environment that can impact the future.

 

  • Markov chain: no variables prior to x_t influence the stochastic evolution of future states, unless this dependence is mediated through the state x_t.
  • Measurement: z_t
  3. Hidden Markov model

 

  • Probabilistic generative laws govern the evolution of states and measurements. The generative law for the state is given by a probability distribution:

p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t})


 

  • If the state x_{t-1} is complete, then it is a sufficient summary of all that happened in previous time steps, and it is sufficient to predict the state x_t:

p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)

The equation above can also be understood as a conditional independence.

 

  • Similarly, we have a generative law for the measurements:

p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t)
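Together, the two generative laws define a hidden Markov model; a minimal simulation sketch with a two-state system (the transition and measurement tables are invented for illustration, and the control is held fixed):

```python
import numpy as np

# Sample a trajectory of a two-state HMM: states evolve with
# p(x_t | x_{t-1}, u_t) and measurements are drawn from p(z_t | x_t).
rng = np.random.default_rng(0)
transition = np.array([[0.9, 0.1],    # p(x_t | x_{t-1}); row = x_{t-1}
                       [0.2, 0.8]])
measurement = np.array([[0.7, 0.3],   # p(z_t | x_t); row = x_t
                        [0.1, 0.9]])

x = 0                                 # initial state
states, obs = [], []
for _ in range(5):
    x = rng.choice(2, p=transition[x])   # state evolution
    z = rng.choice(2, p=measurement[x])  # measurement given the state
    states.append(x)
    obs.append(z)
```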

  4. Belief distribution

Q: We have already defined the state and measurements for the robots and the environment. Why do we need a concept called belief? 

 

A: The true state can never be measured directly, so we define the concept of belief to distinguish the true state from the robot's internal belief about it.

 

  • The belief over a state variable x_t is denoted bel(x_t), an abbreviation for the posterior

bel(x_t) = p(x_t | z_{1:t}, u_{1:t})

  • If we calculate the posterior before incorporating z_t, just after executing the control u_t, we get the prediction (written with an overbar in the book, here bel_bar):

bel_bar(x_t) = p(x_t | z_{1:t-1}, u_{1:t})

Calculating bel(x_t) from bel_bar(x_t) is called correction or measurement update.


 

Bayes filter, a general algorithm for calculating beliefs:

  • The Bayes filter is a recursive algorithm whose goal is to compute bel(x_t). There are three components in this filter:
    • The initial belief bel(x_0)
    • The measurement probability p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t})
    • The state transition probability p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t})
  • Markov assumption: if we assume the state is complete, i.e., if we know x_{t-1}, then past measurements and controls convey no additional information about the state x_t.

This gives us the following conditional independences:

  • p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t)
  • p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)
  • Before we get bel(x_t), we need to compute the prediction bel_bar(x_t) from bel(x_{t-1}) based on the theorem of total probability:

bel_bar(x_t) = p(x_t | z_{1:t-1}, u_{1:t}) = ∫ p(x_t | x_{t-1}, u_t) p(x_{t-1} | z_{1:t-1}, u_{1:t}) dx_{t-1}

Here, p(x_{t-1} | z_{1:t-1}, u_{1:t}) = bel(x_{t-1}), since the future control u_t carries no information about the past state x_{t-1} (note that we can omit u_t), and we get

bel_bar(x_t) = ∫ p(x_t | x_{t-1}, u_t) bel(x_{t-1}) dx_{t-1}
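Over a discrete state space the integral becomes a matrix-vector product; a minimal sketch of the prediction step (the transition table stands in for p(x_t | x_{t-1}, u_t) at one fixed u_t, and all numbers are illustrative):

```python
import numpy as np

# Prediction step of the Bayes filter on a discrete state space:
#   bel_bar(x_t) = sum over x_{t-1} of p(x_t | x_{t-1}, u_t) bel(x_{t-1})
transition = np.array([[0.9, 0.1],    # row = x_{t-1}, column = x_t
                       [0.2, 0.8]])
bel_prev = np.array([0.5, 0.5])       # bel(x_{t-1})

bel_bar = bel_prev @ transition       # marginalizes out x_{t-1}
```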

  • Then, we utilize Bayes rule to incorporate the measurement z_t and compute the posterior:

bel(x_t) = p(x_t | z_{1:t}, u_{1:t}) = η p(z_t | x_t) bel_bar(x_t)

where η is a normalizing constant that makes the posterior integrate to one.
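Putting prediction and correction together gives one full Bayes filter iteration; a sketch over a discrete state space (the transition table and likelihood values are illustrative):

```python
import numpy as np

# One Bayes filter iteration on a discrete state space: predict with the
# transition model, then correct with the measurement likelihood and
# renormalize (the normalizer plays the role of eta).
def bayes_filter_step(bel, transition, likelihood):
    bel_bar = bel @ transition         # prediction via total probability
    unnorm = likelihood * bel_bar      # correction: p(z_t | x_t) * bel_bar(x_t)
    return unnorm / unnorm.sum()       # eta normalizes to a distribution

transition = np.array([[0.9, 0.1],
                       [0.2, 0.8]])
likelihood = np.array([0.7, 0.1])      # p(z_t | x_t) for the observed z_t
bel = bayes_filter_step(np.array([0.5, 0.5]), transition, likelihood)
```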


 

Gaussian filters 

Q: What is the multivariate density function?

 

Q: Why does the covariance matrix measure the linear relationship between variables?

 

Q: Why is the Gaussian distribution suitable for representing the belief?

 

Q: What are the moments of a distribution? Are they important? 


 

  • Moments representation (mean and covariance)
  • Natural or canonical representation (information matrix and information vector)

 

  • Linear Gaussian systems: the next-state probability p(x_t | u_t, x_{t-1}) must be a linear function of its arguments with added Gaussian noise. Since the posterior in continuous space is represented by a probability density function, x_t is the variable and u_t, x_{t-1} are the given arguments:

x_t = A_t x_{t-1} + B_t u_t + ε_t

where ε_t is a Gaussian noise term.
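Sampling the next state of such a system is a matrix-vector operation plus Gaussian noise; a sketch with an invented 2-D constant-velocity model (A, B, R, and all numbers are my own illustration):

```python
import numpy as np

# Sample x_t = A x_{t-1} + B u_t + eps_t with eps_t ~ N(0, R),
# for a toy 2-D state (position, velocity).
dt = 1.0
A = np.array([[1.0, dt],
              [0.0, 1.0]])    # position integrates velocity
B = np.array([[0.0],
              [dt]])          # control acts like an acceleration input
R = np.diag([0.01, 0.01])     # covariance of the Gaussian noise

rng = np.random.default_rng(1)
x_prev = np.array([0.0, 1.0])  # position 0, velocity 1
u = np.array([0.5])
x_next = A @ x_prev + B @ u + rng.multivariate_normal(np.zeros(2), R)
```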











 
