\documentclass[10pt, a4paper]{scrartcl}

\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{blindtext}
\usepackage[utf8]{inputenc}
\usepackage{tikz}
\usepackage{dsfont}
\usepackage[english]{babel}
\usepackage{fullpage}
\usepackage{wasysym}
\usepackage{algpseudocode}
\title{Randomized Algorithms and Probabilistic Methods: Graded Homework 2}
\author{Stefan Lochau - 12-913-752}



\newcommand\abs[1]{\left|#1\right|}
\newcommand\pr[1]{\text{Pr}\left[#1\right]}

\begin{document}
\maketitle
\section*{Exercise 1}

We first define the probability space $\Omega=\prod\limits_{e\in E} \Omega_e$ as the product of the spaces $\Omega_e=([k],\text{uniform distribution})$. We note that $\abs{E}\leq\frac{n \cdot d}{2}$. The influence $c_e$ of each coordinate $\Omega_e$ on $X$ is at most $k$. Thus, applying Azuma's inequality, we get:

$$ \pr{ \abs{X-\mathbb{E}(X)} \geq \alpha \sqrt{n} } \leq 2 e^{-\frac{\alpha^2 n}{2\sum\limits_{e\in E} k^2}}\leq 2 e^{-\frac{\alpha^2 n}{nd k^2}}= 2 e^{-\frac{\alpha^2}{d k^2}}, $$
which is $2 e^{-\omega(1)} = o(1)$ for $\alpha = \omega(1)$.
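For concreteness, a worked instance of this bound (the choice $\alpha=\log(n)$ and treating $d,k$ as constants are our illustration, not part of the exercise statement):
$$ 2 e^{-\frac{\log^2(n)}{d k^2}} = 2\, n^{-\frac{\log(n)}{d k^2}} = n^{-\omega(1)}. $$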
\section*{Exercise 2}
\subsection*{a)}
We first define $X_{v_i}$ as the indicator variable of the event that $v_0$ has informed $v_i$ within the first $64 \log(n)$ rounds, and set $X=\sum_{i=1}^n X_{v_i}$. Furthermore
$$\mu:=\mathbb{E}\left[X\right]=\sum_{i=1}^n \pr{X_{v_i}=1} = \sum_{i=1}^n \left(1-\pr{X_{v_i}=0}\right) = n - n\left(1-\frac{1}{n}\right)^{64\log(n)} $$
We are interested in $\pr{ X \leq 16 \log(n) }$. As this can be interpreted as a ``balls into bins'' problem, we can easily see that the $X_{v_i}$ are negatively associated, so the Chernoff bound applies. Writing $16\log(n)=(1-\delta)\mu$ with $\delta=\frac{\mu-16\log(n)}{\mu}$, we get:
$$ \pr{ X \leq 16 \log(n) } = \pr{ X \leq (1-\delta)\mu } \leq e^{-\frac{\mu \delta^2}{2}}. $$
To estimate the exponent we bound $\mu$. Using $1-x \geq e^{-x-x^2}$ for $x\in[0,1/2]$ (with $x=\frac{1}{n}$) for the upper bound, and $1-x \leq e^{-x}$ together with $e^{-y}\leq 1-y+\frac{y^2}{2}$ for the lower bound, we obtain
$$ (1-o(1))\, 64\log(n) \;\leq\; \mu = n - n\left(1-\frac{1}{n}\right)^{64\log(n)} \;\leq\; (1+o(1))\, 64\log(n). $$
Hence $\delta = 1-\frac{16\log(n)}{\mu} \geq \frac{3}{4}-o(1)$, and
$$ \frac{\mu\delta^2}{2} \geq \frac{1}{2}\,(1-o(1))\,64\log(n)\left(\frac{3}{4}-o(1)\right)^2 = (18-o(1))\log(n) \geq 16\log(n) $$
for $n$ large enough. Therefore
$$ \pr{ X \leq 16 \log(n) } \leq e^{-16\log(n)} = n^{-16} \leq c \cdot n^{-16} $$
for some $c$ and $n$ large enough.
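As a sanity check on the constants (our illustration, not part of the exercise): with $128\log(n)$ rounds instead of $64\log(n)$ and the same threshold $16\log(n)$, the same computation would give $\mu \approx 128\log(n)$, $\delta \approx \frac{7}{8}$, and an exponent of
$$ \frac{\mu\delta^2}{2} \approx 128\log(n)\cdot\frac{49}{128} = 49\log(n), $$
i.e.\ a failure probability of roughly $n^{-49}$.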
\subsection*{b)}
We are interested in $p:=\pr{\text{at least half of the informed nodes send the information to uninformed nodes}}$. For $k$ already informed nodes, we can upper-bound the probability that at least $k/2$ of them send the information to already informed nodes: by a union bound over the $\binom{k}{k/2}$ choices of $k/2$ such nodes, each of which independently hits an informed node with probability at most $\frac{k}{n}$, this probability is at most $\binom{k}{k/2}\left(\frac{k}{n}\right)^{k/2}$. Using $\binom{k}{k/2}\leq\left(\frac{2ek}{k}\right)^{k/2}=(2e)^{k/2}$, we get $$p \geq 1- \binom{k}{k/2}\left(\frac{k}{n}\right)^{k/2} \geq 1-\left(\frac{2ek}{n}\right)^{k/2}$$
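For a concrete reading of this bound (our illustration): as long as at most $k \leq \frac{n}{8e}$ nodes are informed, we have $\frac{2ek}{n}\leq\frac{1}{4}$ and hence
$$ p \geq 1-\left(\frac{1}{4}\right)^{k/2} = 1-2^{-k}, $$
so the failure probability decays exponentially in $k$ throughout this phase.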
\section*{Exercise 3}
\subsection*{a)}
We first note that the algorithm does return a valid vertex cover. We denote $C= \text{VertexCover}(G, \prec)$ and assume it is not a vertex cover, i.e.\ there is an edge $e$ with $e \cap C = \emptyset$. However, this is not possible: when the for-loop reached $e$, it would have found $e \cap C = \emptyset$ and added both endpoints of $e$ to $C$ in that iteration. As elements that are once put into $C$ stay there, this is a contradiction. We immediately get that $vc(G)\leq \abs{C}$.
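For reference, here is a minimal sketch of $\text{VertexCover}(G,\prec)$ as we read it from the exercise statement (this formulation is ours; it uses the already loaded \texttt{algpseudocode} package):
\begin{algorithmic}
\Function{VertexCover}{$G=(V,E)$, $\prec$}
\State $C \gets \emptyset$
\ForAll{$e=\{u,v\}\in E$, in increasing $\prec$-order}
\If{$e \cap C = \emptyset$}
\State $C \gets C \cup \{u,v\}$ \Comment{$e$ is ``used'' to construct $C$}
\EndIf
\EndFor
\State \Return $C$
\EndFunction
\end{algorithmic}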
We note that for all sets of vertices $C'$, we have that $\abs{C'\setminus C}+\abs{C'\cap C}=\abs{C'}$.

%%%%%%%%%% NEEDS STUFF

\subsection*{b)}
We need to show that ``$e$ is used to construct $C$'' $\Leftrightarrow$ ``every $e'$ incident to $e$ with $e' \prec e$ was not used''.

The left side is equivalent to ``$e \cap C = \emptyset$ at the start of the iteration of $e$''.
\subsubsection*{``$\Leftarrow$''}
As no incident $e' \prec e$ was used to construct $C$, neither of the endpoints of $e$ can have been added to $C$ before the iteration of $e$, so $e$ is used.
\subsubsection*{``$\Rightarrow$''}
By contraposition: if some edge $e' \prec e$ incident to $e$ was used to construct $C$, then the vertex incident to both $e$ and $e'$ was added to $C$ in the iteration of $e'$. This means that in the later iteration of $e$ we have $e\cap C \neq \emptyset$, so $e$ is not used, as desired.

\hfill $\square$



\subsection*{c)}

We see that, starting from a fixed vertex, there are at most $d^t$ possible paths of length $t$. With a uniformly selected ordering of the edges, the probability that a fixed path of length $t$ is decreasing is $1/t!$. So, by a union bound over the two endpoints of our fixed edge $e$ and the at most $d^t$ paths starting at each of them, the probability of having a decreasing path of length $t$ starting in $e$ is at most $2d^t/t!$.
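To see how fast this decays (our remark): by $t!\geq\left(\frac{t}{e}\right)^t$,
$$ \frac{2d^t}{t!} \leq 2\left(\frac{ed}{t}\right)^t \leq 2^{1-t} \quad \text{for } t \geq 2ed, $$
so the bound decreases exponentially once $t$ is a sufficiently large multiple of $d$.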

\subsection*{d)}

\section*{Exercise 4}

\subsection*{a)}

We abbreviate $\mathcal{H}(n \times k,p)$ by $\mathcal{H}$.
We note that $\pr{ e \text{ isolated} \land e \in \mathcal{H} }=\pr{e \text{ isolated}\mid e \in \mathcal{H}} \pr{e \in \mathcal{H}}$. The second factor is equal to $p$, so we need to estimate the first one. There are $\sum\limits_{j=1}^{k-1} \binom{k}{j} (n-1)^j$ possible edges that share vertices with our fixed edge $e$ (here $j$ counts the coordinates in which such an edge differs from $e$). Each of them is not in the graph with probability $1-p$, independently, so with $p=\delta n^{-(k-1)}$ and the inequality $1-x\geq e^{-x-x^2}$ we get $$\pr{e \text{ isolated}\mid e \in \mathcal{H}}= (1-p)^{\sum\limits_{j=1}^{k-1} \binom{k}{j} (n-1)^j}\geq (1-p)^{kn^{k-1}+O(n^{k-2})}\geq e^{(-p-p^2)\left(kn^{k-1}+O(n^{k-2})\right)} =$$
$$e^{\left(-\frac{\delta}{n^{k-1}}-\frac{\delta^2}{n^{2(k-1)}}\right)\left(kn^{k-1}+O(n^{k-2})\right)}\geq e^{-\delta k (1+O(n^{-1}))} \stackrel{n \text{ large enough}}{\geq} e^{-\delta k}-\beta\geq 1-2\beta$$
Going back to the initial problem $\pr{ e \text{ isolated} \land e \in \mathcal{H} }$, we get the lower bound $p (1-2\beta)=\delta(1-2\beta)n^{-(k-1)}$ as desired.
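As a quick sanity check of the counting (our remark): for $k=2$, i.e.\ bipartite graphs, the edges sharing a vertex with $e$ are counted by
$$ \sum_{j=1}^{1}\binom{2}{j}(n-1)^j = 2(n-1) = 2n^{2-1}+O(n^{0}), $$
matching the bound $kn^{k-1}+O(n^{k-2})$ used above.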
\subsection*{b)}
We see that, with the $\Omega_i$ as $\text{Bernoulli}(p)$ variables for each possible edge $e\in V_1\times \dots \times V_k$ and $J$ as a maximal matching, $\psi(r)=r$ satisfies the condition of Talagrand's inequality. Furthermore $c_i=1$ as required. Thus we can apply Lemma 8.2 and, together with $\mathbb{E}[X]+f(n) \sqrt{n} \in O(n)$, get:
$$ \pr{\abs{X-\mathbb{E}[X]}\geq f(n) \sqrt{n}} \leq 4 e^{-\Omega\left( \frac{f(n)^2 n}{\mathbb{E}[X]+f(n) \sqrt{n}}\right)}=4 e^{-\Omega\left( f(n)^2 \right)}$$
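For instance (our illustration, not part of the exercise): choosing $f(n)=\log(n)$ gives
$$ \pr{\abs{X-\mathbb{E}[X]}\geq \sqrt{n}\log(n)} \leq 4 e^{-\Omega\left(\log^2(n)\right)} = n^{-\omega(1)}, $$
i.e.\ concentration within a $\sqrt{n}\log(n)$ window with superpolynomially small failure probability.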
\hfill $\square$

\end{document}