  • $x$: real data we already have
  • $z$: noise input from which the data we want to generate is produced
  • $\theta_g$: parameters of the generator
  • $\theta_d$: parameters of the discriminator
  • $G$: Generator
  • $D$: Discriminator

Training is a two-player minimax game on the value function $V(D, G)$: $D$ is trained to maximize $V$, while $G$ is trained to minimize it:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_g(z)}\left[\log\left(1 - D(G(z))\right)\right]$$

Player $D$ wants to make $V$ larger by

  • classifying real data $x$ correctly, i.e. $D(x) \rightarrow 1$
  • rejecting fake data, i.e. $D(G(z)) \rightarrow 0$, which is equivalent to $(1 - D(G(z))) \rightarrow 1$
  • the more accurate $D$ is, the larger $V$ becomes

Player $G$ wants to make $V$ smaller by

  • pushing $D(G(z)) \rightarrow 1$, so that $(1 - D(G(z))) \rightarrow 0$
  • the better $G$'s fakes $G(z)$ are, the closer $D(G(z))$ gets to $1$ and the smaller $V$ becomes (see the sketch below)
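The snippet below is a minimal sketch of a Monte Carlo estimate of $V(D, G)$ on one minibatch, just to make the two directions concrete: the discriminator wants this quantity large, the generator wants it small. The toy architectures and the names `value_fn`, `latent_dim`, `data_dim`, and `batch_size` are illustrative assumptions, not part of the original text.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch_size = 16, 2, 64  # arbitrary sizes for illustration

# Toy generator and discriminator (placeholder architectures).
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

def value_fn(x_real, z):
    """Minibatch estimate of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]."""
    eps = 1e-8  # keep log away from 0
    real_term = torch.log(D(x_real) + eps).mean()    # D pushes this toward log 1 = 0
    fake_term = torch.log(1 - D(G(z)) + eps).mean()  # D pushes this up, G pushes it down
    return real_term + fake_term

x_real = torch.randn(batch_size, data_dim)  # stand-in for samples from p_data(x)
z = torch.randn(batch_size, latent_dim)     # samples from the noise prior
print(value_fn(x_real, z))                  # D tries to maximize this, G to minimize it
```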

Algorithm

for number of iterations do

for $k$ steps do

  • Sample $m$ noise samples $\{ z^{(1)},…,z^{(m)} \}$ from noise prior $p_g(z)$
  • Sample $m$ examples $\{ x^{(1)},…,x^{(m)} \}$ from data generating distribution $p_{data}(x)$
  • Update the discriminator by ascending its stochastic gradient:

$$\nabla_{\theta_d} \frac{1}{m} \sum_{i=1}^{m} \left[ \log D\left(x^{(i)}\right) + \log\left(1 - D\left(G\left(z^{(i)}\right)\right)\right) \right]$$

end for

  • Sample $m$ noise samples $\{ z^{(1)},…,z^{(m)} \}$ from noise prior $p_g(z)$
  • Update the generator by descending its stochastic gradient:

$$\nabla_{\theta_g} \frac{1}{m} \sum_{i=1}^{m} \log\left(1 - D\left(G\left(z^{(i)}\right)\right)\right)$$

end for
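Below is a minimal PyTorch sketch of this training loop, assuming toy `G` and `D` modules like the ones above and a placeholder `sample_real_batch()` data source; the plain SGD optimizers, learning rate, and `k = 1` are illustrative choices rather than values prescribed by the algorithm.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch_size = 16, 2, 64   # illustrative sizes
k, n_iters, lr = 1, 1000, 2e-4                 # illustrative hyperparameters

G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_d = torch.optim.SGD(D.parameters(), lr=lr)
opt_g = torch.optim.SGD(G.parameters(), lr=lr)

def sample_real_batch():
    """Placeholder for samples from p_data(x); real code would draw from a dataset."""
    return torch.randn(batch_size, data_dim) * 0.5 + 2.0

eps = 1e-8  # numerical safety inside log

for it in range(n_iters):
    # k discriminator steps: ascend the minibatch estimate of V
    for _ in range(k):
        z = torch.randn(batch_size, latent_dim)   # noise prior p_g(z)
        x = sample_real_batch()                   # data distribution p_data(x)
        v = torch.log(D(x) + eps).mean() + torch.log(1 - D(G(z).detach()) + eps).mean()
        loss_d = -v                               # ascend V by minimizing -V
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

    # one generator step: descend log(1 - D(G(z)))
    z = torch.randn(batch_size, latent_dim)
    loss_g = torch.log(1 - D(G(z)) + eps).mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

In practice the generator update is often replaced by maximizing $\log D(G(z))$ (the non-saturating variant), because $\log(1 - D(G(z)))$ gives weak gradients early in training; the sketch above keeps the minimax form of the algorithm as written.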
