\documentclass[a4paper, 12pt]{article}

\usepackage{amsmath}
\usepackage[total={17cm,25cm}, top=2.5cm, left=2cm, includefoot]{geometry}
\usepackage{enumitem}
\usepackage[T1]{fontenc}
\usepackage{hyperref}
\usepackage{listings}
\usepackage{color}
\usepackage{graphicx}
\usepackage{titlesec}

\newcommand{\subsubsubsection}[1]{\paragraph{#1}\mbox{}\\}
\setcounter{secnumdepth}{4}
\setcounter{tocdepth}{4}

\setlist[itemize]{topsep=0pt}
\setlength{\itemindent}{0cm}
\setlength{\parskip}{9pt}
\setlength\parindent{0pt}

\lstset{
language=Python,
basicstyle=\fontsize{10}{12}\sffamily,
numbers=left,
numberstyle=\tiny,
frame=tb,
tabsize=3,
columns=fixed,
showstringspaces=false,
showtabs=false,
keepspaces,
commentstyle=\color{red},
keywordstyle=\color{blue}
}

\begin{document}

\begin{titlepage}
\begin{center}
\vspace*{2cm}
\Huge \textbf{Robotics Simplified}\\
\normalsize \vspace{0.5cm}
Generated \today
\end{center}
\end{titlepage}

\tableofcontents
\newpage


\section{Preface}


Although you don't need to know any robotics before reading through this website, there are some things that can make your learning experience much more pleasant.


\subsection{Programming Language}
Understanding basic programming concepts will definitely come in handy before reading through the project. There are lots of great resources for learning programming:
\begin{itemize}
\item \href{https://www.reddit.com/r/learnprogramming/}{Reddit r/learnprogramming}.
\item \href{https://www.codecademy.com/}{Codecademy} and their \href{https://www.codecademy.com/learn/learn-python-3}{Learn Python 3} course.
\item \href{https://projecteuler.net/}{Project Euler} for practicing programming on fun math-based problems.
\end{itemize}
Knowing the syntax of Python would also be helpful, since all of the code examples discussed in the project are written in Python. If you don't know much about Python but already know how to program in a different language, \href{http://histo.ucsf.edu/BMS270/diveintopython3-r802.pdf}{Dive into Python 3} is a great place to start.

It is also recommended to know a little about \href{https://en.wikipedia.org/wiki/Object-oriented_programming}{object-oriented programming}, since we will be basing most of the programs on objects of various classes.

\textbf{If you don't know anything about programming} (and don't have the time to learn) \textbf{but are interested in learning about robotics anyway, you can still read through the chapters.} The code is there mainly as examples for those interested in implementations of the discussed concepts.


\subsection{Libraries and Classes}
Throughout the project, there will be a lot of made-up classes like \texttt{Motor}, \texttt{Joystick} and \texttt{Gyro}. They are only used as placeholders for the real classes that you would (likely) have if you were implementing some of the concepts covered on this website on a platform of your choosing.


\subsection{About comments}
There are a LOT of comments in the examples of code that you are about to see. Way, way more than there should be. I agree that this is considered bad practice for practical purposes and that if you are a relative beginner, you shouldn't write code in a similar fashion.

However, since the purpose of the code on this website is educational and not to serve as a part of a codebase, I would argue that it is fine to use them to such an extent.


\subsection{Running the code}
All of the code on this website has been tested on a \href{https://www.vexrobotics.com/vexedr}{VEX EDR} robot programmed in Python using \href{https://www.robotmesh.com/}{RobotMesh}. If you want to try out the code yourself, doing the same would be the easiest way - all you'd have to do is substitute the made-up classes and methods for real ones from the vex library and run the code.

If you don't have a VEX EDR robot at your disposal, an alternative is to use methods from \href{https://github.com/xiaoxiae/Robotics-Simplified/blob/master/Code/algorithms/utilities.py}{\texttt{utilities.py}} to test out the values the objects/methods return.




\section{Drivetrain Control}


Algorithms and techniques used to control the motors of the robot's \textbf{\href{https://en.wikipedia.org/wiki/Drivetrain}{drivetrain}}.


\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{../assets/images/drivetrain-control/drivetrain.png}
\caption{Drivetrain}
\end{figure}


\href{https://pictures.topspeed.com/IMG/crop/201604/2017-audi-tt-rs-44_1600x0w.jpg}{Drivetrain image source}


The drivetrain of a vehicle is a group of components that deliver power to the driving wheels, hold them together and allow them to move. Here is a \href{http://www.simbotics.org/resources/mobility/drivetrain-selection}{resource} describing various types of drivetrains.

For the purpose of this guide, however, we will only be discussing some of the most frequently used drivetrains and the methods to operate them.




\subsection{Tank drive}
Tank drive is a method of controlling the motors of a robot using two axes of a controller, where each of the axes operates the motors on one side of the robot (see the image below or \href{https://www.youtube.com/watch?v=vK2CGj8gAWc}{a video}).

\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{../assets/images/drivetrain-control/tank-drive.png}
\caption{Tank Drive}
\end{figure}


\href{https://grabcad.com/library/wild-thumper-6wd-chassis-1}{Robot CAD model}, \href{https://target.scene7.com/is/image/Target/GUEST_1e4c1fcb-6962-4533-b961-4e760355db27?wid=488&hei=488&fmt=pjpeg}{Controller image source}

\subsubsection{Implementation}
Suppose that we have objects of the \texttt{Motor} class that set the speed of the motors by calling them with values from -1 to 1. We also have a \texttt{Joystick} object that returns the values of the axes $y_1$ and $y_2$.

Implementing tank drive is really quite straightforward: simply set the left motor to whatever the $y_1$ axis value is, and the right motor to whatever the $y_2$ axis value is:

\begin{lstlisting}
def tank_drive(l_motor_speed, r_motor_speed, left_motor, right_motor):
    """Sets the speed of the left and the right motor."""
    left_motor(l_motor_speed)
    right_motor(r_motor_speed)
\end{lstlisting}


\subsubsection{Examples}
Here's a program that makes the robot drive using the values from the joystick:

\begin{lstlisting}
# create robot's motors and the joystick
left_motor = Motor(1)
right_motor = Motor(2)
joystick = Joystick()

# continuously set motors to the values on the axes
while True:
    # get axis values
    y1 = joystick.get_y1()
    y2 = joystick.get_y2()

    # drive the robot using tank drive
    tank_drive(y1, y2, left_motor, right_motor)
\end{lstlisting}


\subsubsection{Closing remarks}
Tank drive is a very basic and easy way to control the robot. When it comes to FRC, it is a frequently used method for its simplicity, and because it is easier for some drivers to control the robot this way, compared to the other discussed methods.




\subsection{Arcade drive}
Arcade drive is a method of controlling the motors of a robot using two axes of a controller, where one of the axes operates the "turning" component of the robot, and one the "driving" component of the robot (as if you were playing a game at an \textit{arcade}).

\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{../assets/images/drivetrain-control/arcade-drive.png}
\caption{Arcade Drive}
\end{figure}


\href{https://grabcad.com/library/wild-thumper-6wd-chassis-1}{Robot CAD model}, \href{https://target.scene7.com/is/image/Target/GUEST_1e4c1fcb-6962-4533-b961-4e760355db27?wid=488&hei=488&fmt=pjpeg}{Controller image source}



\subsubsection{Implementation}
Suppose that we have a joystick with an $x$ (horizontal) and a $y$ (vertical) axis.

There are many ways to get to the resulting arcade drive equations (for example by using \href{https://www.chiefdelphi.com/media/papers/download/3495}{linear interpolation} for all of the 4 quadrants of the joystick input).

Here is the implementation we get by splitting the values into quadrants:

\begin{lstlisting}
def arcade_drive(rotate, drive, left_motor, right_motor):
    # variables to determine the quadrants
    maximum = max(abs(drive), abs(rotate))
    total, difference = drive + rotate, drive - rotate

    # set speed according to the quadrant that we're in
    if drive >= 0:
        if rotate >= 0:  # I quadrant
            left_motor(maximum)
            right_motor(difference)
        else:  # II quadrant
            left_motor(total)
            right_motor(maximum)
    else:
        if rotate >= 0:  # IV quadrant
            left_motor(total)
            right_motor(-maximum)
        else:  # III quadrant
            left_motor(-maximum)
            right_motor(difference)
\end{lstlisting}


\subsubsection{Examples}
Here's a program that makes the robot drive using the values from the joystick:

\begin{lstlisting}
# create robot's motors and the joystick
left_motor = Motor(1)
right_motor = Motor(2)
joystick = Joystick()

# continuously set motors to the values on the axes
while True:
    # get axis values
    x = joystick.get_x()
    y = joystick.get_y()

    # drive the robot using arcade drive
    arcade_drive(x, y, left_motor, right_motor)
\end{lstlisting}


\subsubsection{Closing remarks}
If you are interested in reading more about this topic, I would suggest looking at \href{https://www.chiefdelphi.com/media/papers/2661}{this thread on Chief Delphi}, where I learned most of the information about the theory behind arcade drive.




\section{Motor controllers}


Autonomous control of the robot using \href{https://en.wikipedia.org/wiki/Motor_controller}{motor controllers}.


It's nice to be able to drive the robot around using a joystick, but it would sometimes be more useful if the robot could drive \textbf{autonomously} (without any external assistance). This is where controllers come in.

A controller is essentially a box that takes in information about the robot and a goal that we want the robot to achieve (like driving a certain distance or turning a certain angle) and, when asked, spits out the values it thinks the robot should set its motors to in order to achieve that goal.

There are a lot of controllers to choose from, and they differ in many ways, such as:
\begin{itemize}
\item \textbf{Accuracy} - how accurate is the controller in getting the robot where it needs to be? How error-prone is it in unexpected situations (hitting a bump on the road, for example)?
\item \textbf{Input} - what information does the controller need to function properly (and accurately)?
\item \textbf{Complexity} - how difficult is it to implement/configure said controller?
\end{itemize}
There is a whole field of study called \href{https://en.wikipedia.org/wiki/Control_theory}{control theory} that examines controllers much more comprehensively than we can in a few short articles. That's why we're only going to talk about a select few.




\subsection{Sample Controller Class}
Let's look at what the classes of each of the controllers described in the following sections will look like:

\begin{lstlisting}
class SampleControllerClass:
    """Description of the class."""

    def __init__(self, ...):
        """Creates a controller object."""
        pass


    def set_goal(self, goal):
        """Sets the goal of the controller."""
        pass


    def get_value(self):
        """Returns the current controller value."""
        pass
\end{lstlisting}

Let's break it down function by function:
\begin{itemize}
\item \texttt{\_\_init\_\_} is called when we want to create the controller object. In the actual controller implementations, \texttt{...} will be replaced by the parameters that the controller takes.
\item \texttt{set\_goal} is called to set the controller's goal, where the goal has to be a number. Note that we need to call this function before we try to get a value from the controller, or things will break. This makes logical sense, because the controller can't really help you reach the goal if you haven't specified one.
\item \texttt{get\_value} will return the value that the controller thinks we should set the motors to in order to achieve our goal. \textbf{All controllers will return a value between -1 and 1} (including -1 and 1).
\end{itemize}
All of this will make more sense as we go through each of the controllers.
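Since every controller exposes the same \texttt{set\_goal}/\texttt{get\_value} pair, code that uses a controller never has to know which one it was given. As a small sketch (the helper name is made up for illustration, just like the \texttt{Motor} callables used throughout this guide), a generic drive loop could look like this:

\begin{lstlisting}
def drive_to_goal(controller, goal, left_motor, right_motor):
    """Drives both motors with whatever values a controller returns."""
    controller.set_goal(goal)

    # keep driving until the controller says to stop (returns 0)
    value = controller.get_value()
    while value != 0:
        left_motor(value)
        right_motor(value)
        value = controller.get_value()
\end{lstlisting}

Any of the controllers from the following sections could then be passed in as \texttt{controller} without changing this function.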




\subsection{Dead reckoning (\href{https://en.wikipedia.org/wiki/Dead_reckoning}{wiki})}
One of the simplest ways of controlling the robot autonomously is using dead reckoning.

It uses one of the first equations you learned in physics: $time = distance / velocity$. We use it to calculate how long it takes the robot to go a certain distance based on its average speed.

Let's look at an example: say our robot drives at an average of $v = 2.5 \frac{m}{s}$. We want it to drive a distance of $d = 10\,m$. To calculate how long it will take the robot, all you have to do is divide the distance by the velocity: $t = d/v = 10/2.5 = 4s$.

This is exactly what dead reckoning does - it calculates the time it will take the robot to drive the distance to the goal. When asked, it returns 1 if the time hasn't elapsed yet and 0 if it has.
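The calculation above is all the math the controller needs; as a quick sanity check (the function name is made up for illustration):

\begin{lstlisting}
def drive_time(distance, speed):
    """Returns the time needed to drive a distance at a constant speed."""
    return distance / speed

drive_time(10, 2.5)  # 4.0 seconds, as computed above
\end{lstlisting}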


\subsubsection{Implementation}
There are two things that the controller needs: the average speed of the robot and a way to measure how much time has passed. Taking this into consideration, this is what a class implementing dead reckoning could look like:

\begin{lstlisting}
class DeadReckoning:
    """A class implementing a dead reckoning controller."""

    def __init__(self, speed, get_current_time):
        """Takes the average speed and a function that returns current time."""
        self.get_current_time = get_current_time
        self.speed = speed


    def set_goal(self, goal):
        """Sets the goal of the controller (and also starts the controller)."""
        self.goal = goal
        self.start_time = self.get_current_time()


    def get_value(self):
        """Returns the current value of the controller."""
        # at what time we should reach the destination (t = t_0 + d/v)
        # abs() so that negative goals (driving backward) also work
        arrival_time = self.start_time + (abs(self.goal) / self.speed)

        # return +-1 if we should still be driving and 0 if not
        if self.get_current_time() < arrival_time:
            return 1 if self.goal > 0 else -1
        else:
            return 0
\end{lstlisting}

As we see, the parameters the \texttt{\_\_init\_\_} function expects are:
\begin{itemize}
\item \texttt{speed} - a number describing how fast the robot drives (on average) in units per second.
\item \texttt{get\_current\_time} - a function returning the current time (used to measure whether we should have arrived or not).
\end{itemize}

\subsubsection{Example}
Let's implement the example mentioned in the introduction and make the robot drive 10 meters:

\begin{lstlisting}
# create robot's motors
left_motor = Motor(1)
right_motor = Motor(2)

# create the controller object and set its goal
controller = DeadReckoning(2.5, get_current_time)
controller.set_goal(10)

# while the controller is telling us to drive forward, drive forward
while controller.get_value() == 1:
    tank_drive(1, 1, left_motor, right_motor)
\end{lstlisting}

Notice how we used our previously implemented \texttt{tank\_drive} function to set both motors to drive forward at maximum speed. We could have just written \texttt{left\_motor(1)} and \texttt{right\_motor(1)}, but this is a cleaner way of writing it.


\subsubsection{Closing remarks}
Although this is quite a simple controller to implement, you might realize that it is neither accurate nor practical. If the robot hits a bump on the road or slips on a banana peel, there is nothing it can do to correct the error (since it doesn't know where it is or where it's going).

Another important thing to keep in mind when using this controller is that if you want to change how fast the robot drives/turns, you will need to re-measure the average speed of the robot to match, which is very tedious.

We'll be focusing on improving accuracy in the upcoming chapters by incorporating real-time data from the robot in our controllers.




\subsection{Bang-bang (\href{https://en.wikipedia.org/wiki/Bang\%E2\%80\%93bang_control}{wiki})}
Although our previous controller was quite easy to implement and use, there is no way for it to know whether it has reached the target or not. It pretty much just turns the motors on for a while and hopes for the best.

Bang-bang aims to fix this problem by using \href{https://en.wikipedia.org/wiki/Feedback}{feedback} from our robot. Feedback could be values from its \href{https://en.wikipedia.org/wiki/Encoder}{encoders} (to measure how far it has gone), a \href{https://en.wikipedia.org/wiki/Gyroscope}{gyro} (to measure where it's heading), or really anything else that we want to control. The important thing is that the data is \textbf{real-time} - the robot constantly gives the controller feedback about what is happening, so the controller can act accordingly.

Bang-bang is the very first idea that comes to mind when we have real-time data: the controller returns 1 if we haven't passed the goal yet and 0 if we have.


\subsubsection{Implementation}
The only thing the bang-bang controller needs is the feedback function returning information about the state of whatever we're trying to control.

\begin{lstlisting}
class Bangbang:
    """A class implementing a bang-bang controller."""

    def __init__(self, get_feedback_value):
        """Creates the bang-bang controller object from the feedback function."""
        self.get_feedback_value = get_feedback_value


    def set_goal(self, goal):
        """Sets the goal of the bang-bang controller."""
        self.goal = goal


    def get_value(self):
        """Returns +1 or -1 (depending on the sign of the goal) when the robot
        should be driving and 0 when it reaches the destination."""
        if self.goal > 0:
            if self.get_feedback_value() < self.goal:
                return 1  # goal not reached and is positive -> drive forward
        else:
            if self.get_feedback_value() > self.goal:
                return -1  # goal not reached and is negative -> drive backward

        # if the robot should be driving neither forward nor backward
        return 0
\end{lstlisting}

\subsubsection{Examples}
For this example, we need an \texttt{Encoder} class to measure how far the robot has driven. Objects of this class will return the average of the distances driven by the left and the right wheel. Here is what a program that drives the robot 10 meters will look like:

\begin{lstlisting}
# create robot's motors and the encoder
left_motor = Motor(1)
right_motor = Motor(2)
encoder = Encoder()

# create the controller object and set its goal
controller = Bangbang(encoder)
controller.set_goal(10)

# while the controller is telling us to drive forward, drive forward
while controller.get_value() == 1:
    tank_drive(1, 1, left_motor, right_motor)
\end{lstlisting}

Notice that pretty much nothing changed between this and the dead reckoning example. This is the main advantage of all of the controllers having the same functions - we can use controller objects almost interchangeably, allowing us to easily try out and compare the accuracies of the controllers without messing with the rest of our code.


\subsubsection{Closing remarks}
This is already markedly better than our previous dead reckoning approach, but it is still relatively inaccurate: the robot's inertia will carry it a little past the goal after the controller tells it to stop driving, which means it will overshoot.

We could try to fix this by saying that it should start driving backward once it has passed the goal, but the only thing you'd get is a robot that drives back and forth across the goal (which may be amusing, but not very helpful).

In the next chapters, we will try to improve our approach and create controllers that don't just return 1 for driving and 0 for not driving, but also values in-between (for when the robot should be driving slower or faster).




\subsection{PID (\href{https://en.wikipedia.org/wiki/PID_controller}{wiki})}
Our previous attempt at creating a controller that uses feedback from the robot can be further improved by considering how the \textbf{error} (the difference between the feedback value and the goal) changes over time.

Since PID is an abbreviation, let's talk about what the terms $P$, $I$ and $D$ mean:
\begin{itemize}
\item $P$ stands for \textbf{proportional} - how large the error is now (in the \textbf{present}).
\item $I$ stands for \textbf{integral} - how large the error (cumulatively) was in the \textbf{past}.
\item $D$ stands for \textbf{derivative} - what the error will likely be in the \textbf{future}.
\end{itemize}
The controller takes into account what happened, what is happening, and what will likely happen, and continuously calculates each of the terms as the error changes:

\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{../assets/images/motor-controllers/pid.png}
\caption{PID}
\end{figure}


\href{https://upload.wikimedia.org/wikipedia/commons/4/40/Pid-feedback-nct-int-correct.png}{PID image source}



\subsubsection{Implementation}
The controller will need the $p$, $i$ and $d$ constants to know how important each of the aforementioned parts (proportional, integral, derivative) is. It will also need a feedback function and, to correctly calculate the integral and derivative, a function that returns the current time:

\begin{lstlisting}
class PID:
    """A class implementing a PID controller."""

    def __init__(self, p, i, d, get_current_time, get_feedback_value):
        """Initialises PID controller object from P, I, D constants, a function
        that returns current time and the feedback function."""
        # p, i, and d constants
        self.p, self.i, self.d = p, i, d

        # save the functions that return the time and the feedback
        self.get_current_time = get_current_time
        self.get_feedback_value = get_feedback_value


    def reset(self):
        """Resets/creates variables for calculating the PID values."""
        # reset PID values
        self.proportional, self.integral, self.derivative = 0, 0, 0

        # reset previous error; previous time is set to the current time,
        # so that the first delta_time isn't huge
        self.previous_time = self.get_current_time()
        self.previous_error = 0


    def get_value(self):
        """Calculates and returns the PID value."""
        # calculate the error (how far off the goal we are)
        error = self.goal - self.get_feedback_value()

        # get current time
        time = self.get_current_time()

        # time and error differences to the previous get_value call
        delta_time = time - self.previous_time
        delta_error = error - self.previous_error

        # calculate proportional (just error times the p constant)
        self.proportional = self.p * error

        # calculate integral (error accumulated over time times the constant)
        self.integral += error * delta_time * self.i

        # calculate derivative (rate of change of the error)
        # for the rate of change, delta_time can't be 0 (division by zero...)
        self.derivative = 0
        if delta_time > 0:
            self.derivative = delta_error / delta_time * self.d

        # update previous error and previous time values to the current values
        self.previous_time, self.previous_error = time, error

        # add past, present and future
        pid = self.proportional + self.integral + self.derivative

        # return pid clamped to values from -1 to +1
        return 1 if pid > 1 else -1 if pid < -1 else pid


    def set_goal(self, goal):
        """Sets the goal and resets the controller variables."""
        self.goal = goal
        self.reset()
\end{lstlisting}

To fully understand how the controller works, I suggest you closely examine the \texttt{get\_value()} function - that's where all the computation happens.

Notice the new function called \texttt{reset}, which we haven't seen in any of the other controllers. It is called every time we set the goal, because the controller accumulates error over time in the \texttt{self.integral} variable, which would otherwise make it take longer to adjust to the new goal.

It doesn't change the versatility of the controller classes, because we don't need to call it for the controller to function properly - it's just a useful function to have if we want to call it manually.


\subsubsection{Tuning the controller}
PID is the first discussed controller that needs to be tuned correctly to perform well, because if you set the constants to the wrong values, the controller will perform \href{https://www.youtube.com/watch?v=MxALJU_hp34}{poorly}.

There is a \href{https://en.wikipedia.org/wiki/PID_controller#Loop_tuning}{whole section} on Wikipedia about PID tuning. We won't go into details (read through the Wikipedia article if you're interested), but it is something to keep in mind when using PID.
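To get an intuition for what the constants do without a physical robot, it can help to run the PID math against a toy simulation. The following is only a sketch (the simple motor/inertia model and all of the numbers are made up for illustration; they are not taken from the project): trying different $p$, $i$ and $d$ values here and watching whether the position settles, overshoots or oscillates is essentially what manual tuning feels like.

\begin{lstlisting}
def simulate(p, i, d, goal=10.0, dt=0.02, steps=500):
    """Runs a PID loop against a crude simulated robot and
    returns the robot's final position."""
    position, velocity = 0.0, 0.0
    integral, previous_error = 0.0, goal

    for _ in range(steps):
        # same math as in the PID class above, inlined for brevity
        error = goal - position
        integral += error * dt * i
        derivative = (error - previous_error) / dt * d
        previous_error = error

        # clamp the output to the -1..1 motor range
        pid = p * error + integral + derivative
        pid = max(-1.0, min(1.0, pid))

        # made-up motor model: velocity lags behind the motor value
        velocity += (pid * 5.0 - velocity) * dt
        position += velocity * dt

    return position
\end{lstlisting}

With, for example, \texttt{simulate(0.5, 0, 0.1)}, the position settles close to the goal of 10; dropping the derivative term or cranking up the integral term makes the simulated robot overshoot and oscillate.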


\subsubsection{Examples}

\subsubsubsection{Driving a distance}
Here is an example that makes the robot drive 10 meters forward. The constants are the values that I used on the VEX EDR robot that I built to test the PID code; you will likely have to use different ones:

\begin{lstlisting}
# create robot's motors and the encoder
left_motor = Motor(1)
right_motor = Motor(2)
encoder = Encoder()

# create the PID controller (with encoder being the feedback function)
controller = PID(0.07, 0.001, 0.002, time, encoder)
controller.set_goal(10)

while True:
    # get the speed from the controller and apply it using tank drive
    value = controller.get_value()
    tank_drive(value, value, left_motor, right_motor)
\end{lstlisting}


\subsubsubsection{Auto-correct heading}
Auto-correcting the heading of a robot is something PID is great for. What we want is to program the robot so that if something (like an evil human) pushes it, it adjusts itself to head the way it was heading before the push.

We could use the values from the encoders on the left and the right side to calculate the angle, but a more elegant (and accurate) solution is to use a gyro. Let's therefore assume that we have a \texttt{Gyro} class whose objects give us the current heading of the robot.

One thing we have to think about is what to set the motors to when we get the value from the controller, because to turn the robot, the motors have to be going in opposite directions. Luckily, \texttt{arcade\_drive} is our savior: we can plug the PID values directly into the turning part of arcade drive (the \texttt{x} axis) to steer the robot. Refer back to the Arcade drive section if you are unsure as to how/why this works.

\begin{lstlisting}
# create robot's motors and the gyro
left_motor = Motor(1)
right_motor = Motor(2)
gyro = Gyro()

# create the PID controller with gyro being the feedback function
controller = PID(0.2, 0.002, 0.015, time, gyro)
controller.set_goal(0)  # the goal is 0 - we want the heading to be 0

while True:
    # get the value from the controller
    value = controller.get_value()

    # set the turning component of arcade drive to the controller value
    arcade_drive(value, 0, left_motor, right_motor)
\end{lstlisting}


\subsubsubsection{Two controller combination}
What's even nicer is that we can combine the two examples that we just implemented into ONE - a robot that drives forward and corrects itself when it isn't heading the right way.

We will create two controllers - one for driving straight a certain distance, and one for turning to correct possible heading errors.

Arcade drive will again be our dear friend, since we can plug the values from the controller that controls driving directly into the driving part of arcade drive, and the values from the controller that controls heading directly into the turning part:

\begin{lstlisting}
# create robot's motors, gyro and the encoder
left_motor = Motor(1)
right_motor = Motor(2)
gyro = Gyro()
encoder = Encoder()

# create separate controllers for turning and driving
drive_controller = PID(0.07, 0.001, 0.002, time, encoder)
turn_controller = PID(0.2, 0.002, 0.015, time, gyro)

# we want to stay at the 0 degree angle and drive 10 meters at the same time
turn_controller.set_goal(0)
drive_controller.set_goal(10)

while True:
    # get the values from both controllers
    turn_value = turn_controller.get_value()
    drive_value = drive_controller.get_value()

    # drive/turn using arcade drive
    arcade_drive(turn_value, drive_value, left_motor, right_motor)
\end{lstlisting}
  628.  
  629.  
\subsubsection{Closing remarks}
PID is one of the most widely used controllers not just in robotics, but in many other industries (e.g. controlling a boiler or a thermostat), because it is reliable, relatively easy to implement, and precise enough for most use cases.

For motivation, here is a \href{https://www.youtube.com/watch?v=4Y7zG48uHRo}{great video} demonstrating the power of a correctly tuned PID controller.



\subsection{Polynomial Function}
Another way to get values that aren't just 1's and 0's is to model a function from a set of points and read the speed of the robot from it. For example: start at speed 0.2, drive at full speed at half the distance, and slow back down to 0 at the end.

A polynomial function is a great candidate for this task. We can pick the points that we want the function to pass through and then use \href{https://en.wikipedia.org/wiki/Polynomial_regression}{polynomial regression} to get the coefficients of the function. \href{https://mycurvefit.com/}{MyCurveFit.com} is a great website for this exact purpose. Here is what a modeled polynomial function could look like:

\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{../assets/images/motor-controllers/polynomial-function.png}
\caption{Polynomial function}
\end{figure}

As you can see, it returns all sorts of values from 0 to 1.

One thing you should also notice is that the function starts at $x = 0$ and ends at $x = 1$. This is deliberate: it makes it easy for us to ``stretch'' the function if we want to drive some other distance than 1 meter.

\subsubsection{Horner's method (\href{https://en.wikipedia.org/wiki/Horner\%27s_method}{wiki})}
When it comes to programming, exponentiation tends to be quite imprecise and slow. Horner's method is a neat solution to this problem. The concept is simple: algebraically rearrange the expression so that it contains no exponentiation:

$$\large 2x^3 + 4x^2 - x + 5 \quad \rightarrow \quad x(x(x(2) + 4) - 1) + 5$$

This trick can be performed on a polynomial of any degree; this is just an example of how the method works.
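
In code, the rewritten form becomes a single loop over the coefficients. Here is a minimal sketch (the \texttt{horner} helper and its inputs are ours, purely for illustration):

```python
def horner(coefficients, x):
    """Evaluate a polynomial at x, given its coefficients ordered from the
    highest power down to the constant term, without any exponentiation."""
    value = coefficients[0]
    for coefficient in coefficients[1:]:
        value = x * value + coefficient
    return value

# 2x^3 + 4x^2 - x + 5 at x = 2: 2*8 + 4*4 - 2 + 5
print(horner([2, 4, -1, 5], 2))  # 35
```

Note that each loop iteration performs one multiplication and one addition, which is exactly the nesting shown above.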


\subsubsection{Implementation}
The controller only needs the coefficients of the polynomial that we modeled and a feedback function. Here is what the implementation looks like with Horner's method:

\begin{lstlisting}
class PolynomialFunction:
    """A class implementing a polynomial function controller."""

    def __init__(self, coefficients, get_feedback_value):
        """Initialises the polynomial function controller from the polynomial
        coefficients and the feedback function."""
        self.coefficients = coefficients  # the coefficients of the function
        self.get_feedback_value = get_feedback_value  # the feedback function

    def get_value(self):
        """Returns the polynomial function value at the feedback value."""
        # calculate the x coordinate (by "stretching" the function by goal)
        x = self.get_feedback_value() / abs(self.goal)

        # calculate the function value using Horner's method
        value = self.coefficients[0]
        for i in range(1, len(self.coefficients)):
            value = x * value + self.coefficients[i]

        # if the value is over 1, set it to 1
        if value > 1:
            value = 1

        # if the goal is negative, the function value is negative
        return value if self.goal > 0 else -value

    def set_goal(self, goal):
        """Sets the goal of the controller."""
        self.goal = goal
\end{lstlisting}

\subsubsection{Examples}

\subsubsubsection{Driving a distance}
Once again, the code is almost exactly the same as in the examples for the other controllers; the only difference is that a \texttt{PolynomialFunction} controller takes a list of the polynomial's coefficients to calculate the controller value:

\begin{lstlisting}
# create the robot's motors and the encoder
left_motor = Motor(1)
right_motor = Motor(2)
encoder = Encoder()

# create the controller (with the encoder as the feedback function)
controller = PolynomialFunction([-15.69, 30.56, -21.97, 6.91, 0.2], encoder)
controller.set_goal(10)

while True:
    # get the speed from the controller and apply it using tank drive
    value = controller.get_value()
    tank_drive(value, value, left_motor, right_motor)
\end{lstlisting}

\subsubsection{Generating a polynomial}
An alternative to ``stretching'' the polynomial to fit the goal is to specify the points the polynomial passes through and generate the coefficients \textit{after} the goal is specified.

Say you have the points $(0,\ 0.2)$, $(0.4,\ 1)$, $(0.6,\ 1)$ and $(1,\ 0)$. Since there are 4 points, the general form of the polynomial is $y = ax^3 + bx^2 + cx + d$. Using this information, we can create a system of linear equations:

\begin{align*}
0.2 &= a(0)^3 + b(0)^2 + c(0) + d \\
1 &= a(0.4)^3 + b(0.4)^2 + c(0.4) + d \\
1 &= a(0.6)^3 + b(0.6)^2 + c(0.6) + d \\
0 &= a(1)^3 + b(1)^2 + c(1) + d
\end{align*}

Solving this system of linear equations gives us the coefficients of the polynomial. We can apply this method to a polynomial of any degree, given enough points: $n$ points with distinct $x$ coordinates determine a polynomial of degree $d = n - 1$.
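
To make this concrete, here is a small sketch that builds and solves the system above with Gaussian elimination (plain Python; in a real project you would likely reach for a linear algebra library instead):

```python
def solve(matrix, rhs):
    """Solve a linear system via Gaussian elimination with partial pivoting."""
    n = len(rhs)
    # build the augmented matrix [matrix | rhs]
    a = [row[:] + [b] for row, b in zip(matrix, rhs)]
    for i in range(n):
        # pivot: swap in the row with the largest entry in column i
        pivot = max(range(i, n), key=lambda r: abs(a[r][i]))
        a[i], a[pivot] = a[pivot], a[i]
        # eliminate column i from the rows below
        for r in range(i + 1, n):
            factor = a[r][i] / a[i][i]
            for col in range(i, n + 1):
                a[r][col] -= factor * a[i][col]
    # back-substitution, from the last row up
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (a[i][n] - sum(a[i][col] * x[col]
                              for col in range(i + 1, n))) / a[i][i]
    return x

# each row is [x^3, x^2, x, 1] for one point's x coordinate
points = [(0, 0.2), (0.4, 1), (0.6, 1), (1, 0)]
matrix = [[x**3, x**2, x, 1] for x, _ in points]
a, b, c, d = solve(matrix, [y for _, y in points])
```

The resulting \texttt{a}, \texttt{b}, \texttt{c}, \texttt{d} are the coefficients of the cubic that passes exactly through the four points.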


\subsubsection{Closing remarks}
Although this controller isn't as widely used as PID, it can frequently perform better, namely in situations where the range of movement of the motors is restricted, such as the forks of a forklift or the joints of a robot arm.




\section{Odometry}


Tracking the position of the robot.


Odometry is the use of data from the robot's sensors to estimate its position over time.

This means that we can calculate where the robot currently is by reading sensors such as an encoder, a gyro, or a camera, and performing calculations using said readings.




\subsection{Sensor Values}
There are two things we need to know to perform the approximation:
\begin{itemize}
\item \textbf{Distance $\Delta$} - how much did we move by?
\item \textbf{Heading $\Delta$} - which way are we heading?
\end{itemize}
Assuming we have encoders on both sides of the robot, the distance is quite easy to calculate: we read the values of the encoders on both sides and average them. Assuming we also have a gyro, the heading is easy too: it is directly the value the gyro returns.

But what if we didn't have a gyro?

\subsubsection{Calculating heading without a gyro}
Although a gyro is arguably the most precise way to measure the current heading, it's not always available: it might be too expensive, impractical to mount on a small robot, or unusable because of other conditions. In cases like these, it is good to know how to calculate the heading from the encoder readings alone.

Say the robot drove a small arc. The left encoder measured a distance $l$ and the right encoder a distance $r$. The length of the axis between the two wheels is $c$, the angle by which we turned is $\omega$ (measured in radians), and $x$ is just a variable to help with our calculations. Here is an illustration:

\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{../assets/images/odometry/heading-from-encoders.png}
\caption{Heading from encoders}
\end{figure}

From this diagram, we can derive equations for the lengths of the arcs $l$ and $r$ (here is an article about \href{https://www.mathopenref.com/arclength.html}{arc length}, if you need further clarification):

$$\large l = x \cdot \omega \qquad r = \left(c + x\right) \cdot \omega$$

We can then combine the equations, simplify, and solve for $\omega$:

$$\large \frac{l}{\omega} = \frac{r}{\omega} - c$$

$$\large \omega = \frac{r - l}{c}$$

And that's it! The angle can be calculated from the difference of the encoder readings, divided by the length of the axis.
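
As a quick numeric check (the values are made up): with an axis width of $c = 0.5$ m, a left reading of $0.75$ m, and a right reading of $1$ m, the robot has turned by $(1 - 0.75) / 0.5 = 0.5$ radians:

```python
c = 0.5   # axis width in meters (made-up value)
l = 0.75  # distance measured by the left encoder
r = 1.0   # distance measured by the right encoder

omega = (r - l) / c  # heading change in radians
print(omega)  # 0.5
```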




\subsection{Line Approximation}
For our first approximation, let's make an assumption that will simplify our equations: instead of driving an arc, the robot will first turn to the specified angle and only then drive the distance in a straight line.

This is quite a reasonable assumption to make for small angles, and since we are going to be updating the position multiple times per second, the angles aren't going to be as drastic as the picture portrays:

\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{../assets/images/odometry/line-approximation.png}
\caption{Line Approximation}
\end{figure}


The robot just moved a distance $d$. It was previously at an angle $\theta$ and is now at an angle $\theta + \omega$. We want to calculate the new position of the robot after this move.


\subsubsection{Deriving the equations}
One way to do this is to imagine a right triangle with $d$ as the hypotenuse. We will use \href{https://www2.clarku.edu/faculty/djoyce/trig/formulas.html}{trigonometric formulas} and solve for $\Delta x$ and $\Delta y$:

$$\large \sin(\theta + \omega) = \frac{\Delta y}{d} \qquad \cos(\theta + \omega) = \frac{\Delta x}{d}$$

$$\large \Delta y = d \cdot \sin(\theta + \omega) \qquad \Delta x = d \cdot \cos(\theta + \omega)$$

The resulting coordinates $(x, y)$ are $x = x_0 + \Delta x$ and $y = y_0 + \Delta y$.

\subsubsection{Implementation}
Here is how one could implement a class that tracks the current position of the robot from the two encoder readings, using the aforementioned line approximation method (note that the distance $d$ driven per update is the average of the two encoder deltas):

\begin{lstlisting}
from math import cos, sin

class LineApproximation:
    """A class to track the position of the robot in a system of coordinates,
    using only encoders as feedback and the line approximation method."""

    def __init__(self, axis_width, l_encoder, r_encoder):
        """Saves input values, initializes class variables."""
        self.axis_width = axis_width
        self.l_encoder, self.r_encoder = l_encoder, r_encoder

        # previous values for the encoder positions and heading
        self.prev_l, self.prev_r, self.prev_heading = 0, 0, 0

        # starting position of the robot
        self.x, self.y = 0, 0

    def update(self):
        """Update the position of the robot."""
        # get sensor values and the previous heading
        l, r, heading = self.l_encoder(), self.r_encoder(), self.prev_heading

        # calculate encoder deltas (differences from the previous readings)
        l_delta, r_delta = l - self.prev_l, r - self.prev_r

        # the distance driven is the average of the two encoder deltas
        distance = (l_delta + r_delta) / 2

        # calculate the heading delta (omega)
        h_delta = (r_delta - l_delta) / self.axis_width

        # approximate the position using the line approximation method
        self.x += distance * cos(heading + h_delta)
        self.y += distance * sin(heading + h_delta)

        # set the previous values to the current values
        self.prev_l, self.prev_r, self.prev_heading = l, r, heading + h_delta

    def get_position(self):
        """Return the position of the robot."""
        return (self.x, self.y)
\end{lstlisting}

Note that for the position estimate to be accurate, the \texttt{update()} function of the class needs to be called multiple times per second.

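To see the method in action, here is a standalone sketch: a condensed version of the update step above, fed with fake encoder readings (made-up numbers) from a quarter-circle drive:

```python
from math import cos, sin, pi

def track(l_readings, r_readings, axis_width):
    """Run the line approximation over lists of cumulative encoder readings
    and return the final (x, y) estimate."""
    x = y = heading = prev_l = prev_r = 0.0
    for l, r in zip(l_readings, r_readings):
        l_delta, r_delta = l - prev_l, r - prev_r
        distance = (l_delta + r_delta) / 2
        h_delta = (r_delta - l_delta) / axis_width
        x += distance * cos(heading + h_delta)
        y += distance * sin(heading + h_delta)
        heading += h_delta
        prev_l, prev_r = l, r
    return x, y

# simulate a quarter circle of radius 1 m with a 0.4 m axis, in 1000 updates:
# the left wheel runs on radius 0.8 m, the right wheel on radius 1.2 m
steps = 1000
left = [(pi / 2) * 0.8 * i / steps for i in range(1, steps + 1)]
right = [(pi / 2) * 1.2 * i / steps for i in range(1, steps + 1)]

x, y = track(left, right, 0.4)
```

With 1000 updates, the estimate lands within a few millimeters of the true end point $(1, 1)$, which illustrates why calling \texttt{update()} frequently matters.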


\subsection{Circle Approximation}
Although the approximation that assumes the robot first turns and only then drives straight can be decently accurate, we can make it more precise by assuming that the robot drives in an arc (which is quite close to what the robot really does), as seen in the picture below:

\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{../assets/images/odometry/circle-approximation.png}
\caption{Circle Approximation}
\end{figure}


The robot just moved in an arc; the left encoder measured a distance $l$ and the right encoder a distance $r$. The robot was previously at an angle $\theta$ and is now at an angle $\theta + \omega$. We want to calculate the new position of the robot after this move.

\subsubsection{Deriving the equations}
Deriving equations for this model will be a little more difficult than for the previous one, but they should still be relatively easy to follow.

The way to calculate the new coordinates is to find the radius $R$ of the \href{https://en.wikipedia.org/wiki/Instant_centre_of_rotation}{ICC} (\textit{Instantaneous Center of Curvature}, the point around which the robot is turning) and then rotate $(x_0, y_0)$ around it.


\subsubsubsection{Calculating $R$}
Let's start by finding the formula for calculating $R$. We will derive it from the formulas for calculating $l$ and $r$:

$$\large l = \omega \cdot \left(R - \frac{c}{2}\right) \qquad \large r = \omega \cdot \left(R + \frac{c}{2}\right)$$

Combining the equations and solving for $R$ gives us:

$$\large R = \frac{r + l}{2\omega}$$

From our previous article about \href{{{site.baseurl}}odometry/sensor-values/}{Sensor Values}, we know that $\omega = \frac{r - l}{c}$. If we don't have a gyro, we can plug that into our newly derived formula and get:

$$\large R = \frac{r+l}{\frac{r - l}{c} \cdot 2} = \frac{r+l}{r - l} \cdot \frac{c}{2}$$

\subsubsubsection{Rotating $(x_0, y_0)$ around the ICC}
For this section, we will assume that you know how to rotate a point around the origin by a certain angle. If not, here is a \href{https://www.khanacademy.org/partner-content/pixar/sets/rotation/v/sets-8}{video} from Khan Academy deriving the equations for rotating a point around the origin.

First off, we will need the coordinates of the ICC. Since it lies perpendicular to the left of the robot, we will use the same \href{https://stackoverflow.com/questions/4780119/2d-euclidean-vector-rotations}{trick} as rotating a vector by 90 degrees counter-clockwise: switch the coordinates and negate the first one:

$$\large ICC_x = x_0 - R \, \sin(\theta) \qquad ICC_y = y_0 + R \, \cos(\theta)$$

To rotate $(x_0, y_0)$ around the ICC (and therefore find $(x, y)$), we will first translate the ICC to the origin, then rotate, and then translate back:

$$\large x = (x_0 - ICC_x) \cdot \cos(\omega) - (y_0 - ICC_y) \cdot \sin(\omega) + ICC_x$$

$$\large y = (x_0 - ICC_x) \cdot \sin(\omega) + (y_0 - ICC_y) \cdot \cos(\omega) + ICC_y$$

After plugging in the values for $ICC_x$, $ICC_y$ and simplifying, we get:

$$\large x = R \, \sin(\theta) \cdot \cos(\omega) + R \, \cos(\theta) \cdot \sin(\omega) + x_0 - R \, \sin(\theta)$$

$$\large y = R \, \sin(\theta) \cdot \sin(\omega) - R \, \cos(\theta) \cdot \cos(\omega) + y_0 + R \, \cos(\theta)$$

Using the angle sum identities, the equations can be further simplified to:

$$\large x = x_0 + R \, \sin(\theta + \omega) - R \, \sin(\theta)$$

$$\large y = y_0 - R \, \cos(\theta + \omega) + R \, \cos(\theta)$$
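
As a sanity check of the algebra, we can numerically compare the explicit rotation with the simplified closed form for some made-up values of $x_0$, $y_0$, $\theta$, $\omega$ and $R$; the two should agree to floating-point precision:

```python
from math import sin, cos

# made-up values: previous pose, turn angle and turning radius
x0, y0, theta, omega, R = 2.0, 3.0, 0.3, 0.2, 1.5

# rotate (x0, y0) around the ICC explicitly
icc_x, icc_y = x0 - R * sin(theta), y0 + R * cos(theta)
x_exp = (x0 - icc_x) * cos(omega) - (y0 - icc_y) * sin(omega) + icc_x
y_exp = (x0 - icc_x) * sin(omega) + (y0 - icc_y) * cos(omega) + icc_y

# the simplified closed form
x_simple = x0 + R * sin(theta + omega) - R * sin(theta)
y_simple = y0 - R * cos(theta + omega) + R * cos(theta)
```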


\subsubsubsection{Edge cases}
There is one noteworthy combination of the values $l$ and $r$ for which our circle approximation method won't work.

If $r = l$ (i.e. $\omega = 0$), the radius cannot be calculated, because it would be infinite. For this reason, our method can't be used on its own: if the robot drove in a straight line, the code would crash on a division by zero.

We can, however, still employ our line approximation method, since in this case the robot really is driving in a straight line! It is also a good idea to apply the line approximation for very small angles, since it's almost like driving straight, and it's far less computation-heavy.

\subsubsection{Implementation}
Here is the implementation, combining both of the approximation methods:

\begin{lstlisting}
from math import cos, sin

class CircleApproximation:
    """A class to track the position of the robot in a system of coordinates,
    using only encoders as feedback and a combination of the line and circle
    approximation methods."""

    def __init__(self, axis_width, l_encoder, r_encoder):
        """Saves input values, initializes class variables."""
        self.axis_width = axis_width
        self.l_encoder, self.r_encoder = l_encoder, r_encoder

        # previous values for the encoder positions and heading
        self.prev_l, self.prev_r, self.prev_heading = 0, 0, 0

        # starting position of the robot
        self.x, self.y = 0, 0

    def update(self):
        """Update the position of the robot."""
        # get sensor values and the previous heading
        l, r, heading = self.l_encoder(), self.r_encoder(), self.prev_heading

        # calculate encoder deltas (differences from the previous readings)
        l_delta, r_delta = l - self.prev_l, r - self.prev_r

        # calculate the heading delta (omega)
        h_delta = (r_delta - l_delta) / self.axis_width

        # approximate a line if we're going (almost) straight,
        # otherwise calculate the arc
        if abs(l_delta - r_delta) < 1e-5:
            distance = (l_delta + r_delta) / 2
            self.x += distance * cos(heading)
            self.y += distance * sin(heading)
        else:
            # calculate the radius of the ICC
            R = (self.axis_width / 2) * (r_delta + l_delta) / (r_delta - l_delta)

            # calculate the robot position by finding the point that is
            # rotated around the ICC by the heading delta
            self.x += R * sin(h_delta + heading) - R * sin(heading)
            self.y += -R * cos(h_delta + heading) + R * cos(heading)

        # set the previous values to the current values
        self.prev_l, self.prev_r, self.prev_heading = l, r, heading + h_delta

    def get_position(self):
        """Return the position of the robot."""
        return (self.x, self.y)
\end{lstlisting}




\section{Resources}


Links to resources either directly used by the website (such as libraries) or that helped me understand the concepts mentioned in the articles.


\subsection{Math}
The website uses \href{https://www.mathjax.org/}{MathJax} to render LaTeX equations. It previously used \href{http://mathurl.com/}{mathURL}, but the equations were much more effort to include, and they weren't interactive.

\subsection{Images}
The images used to illustrate the concepts on this website are modified using \href{https://inkscape.org/cs/}{Inkscape} (a free vector graphics editor) and \href{https://www.gimp.org/}{GIMP} (a free bitmap graphics editor). CAD model images are generated and adjusted using \href{https://www.autodesk.com/products/fusion-360/students-teachers-educators}{Fusion 360} (free CAD design software).

\subsection{p5.js}
Visualizations on the website are created using the \href{https://p5js.org/}{p5.js} library. This \href{https://raw.githubusercontent.com/KevinWorkman/HappyCoding/gh-pages/examples/p5js/_posts/2018-07-04-fireworks.md}{example} helped me understand how it works with Jekyll. I use the \href{https://editor.p5js.org/}{p5.js web editor} to edit the visualizations before I put them on the website.

\subsection{VEX EDR}
To test the algorithms, I built a custom VEX EDR robot using \href{https://www.vexrobotics.com/276-3000.html}{this kit}, which the educational center \href{http://www.vctu.cz/}{VCT} kindly lent me. The robot is programmed in Python using \href{https://www.robotmesh.com/studio}{RobotMesh studio} (for more information, see the Python \href{https://www.robotmesh.com/docs/vexcortex-python/html/namespaces.html}{documentation}).

\subsection{Autonomous motion control}
\href{https://github.com/AtsushiSakai/PythonRobotics}{PythonRobotics} is a great repository containing implementations of various robotics algorithms in Python.

\subsection{PID}
I studied a PID Python \href{https://github.com/ivmech/ivPID}{implementation} before writing my own.

\subsection{Drivetrain Control}
A few helpful articles let me understand the equations behind the more complex drivetrains:
\begin{itemize}
\item Arcade drive Chief Delphi forum \href{https://www.chiefdelphi.com/media/papers/2661}{post} by Ether.
\item Simplistic Control of Mecanum Drive \href{https://forums.parallax.com/discussion/download/79828/ControllingMecanumDrive\%255B1\%255D.pdf&sa=U&ved=0ahUKEwiX5LzFiNrfAhVswYsKHTofDrwQFggEMAA&client=internal-uds-cse&cx=002870150170079142498:hq1zjyfbawy&usg=AOvVaw19D74YD--M3YmQ2MGd1rTg}{paper}.
\item Swerve drive Chief Delphi forum \href{https://www.chiefdelphi.com/t/paper-4-wheel-independent-drive-independent-steering-swerve/107383}{post} by Ether.
\end{itemize}

\subsection{Circle Approximation}
Two main resources helped me put together the Circle Approximation article:
\begin{itemize}
\item Kinematics Equations for Differential Drive and Articulated Steering \href{http://www8.cs.umu.se/kurser/5DV122/HT13/material/Hellstrom-ForwardKinematics.pdf}{whitepaper}
\item Position Estimation \href{http://people.scs.carleton.ca/~lanthier/teaching/COMP4807/Notes/5\%20-\%20PositionEstimation.pdf}{presentation}
\end{itemize}



\section{About}


Additional information about the project.


\subsection{Motivation}
When I joined the FRC team \href{https://www.metalmoose.org/}{Metal Moose} and started learning about robotics, I didn't find many beginner-friendly resources for people like me. They were hard to find and scattered all around the web; one had to be quite persistent to actually learn something on their own.

That is the main reason for creating this website: to serve as a resource for people who want to learn the concepts of robotics without having to go through all the trouble of finding quality resources.


\subsection{Contact information}
If you'd like to get in touch regarding anything, email me at \texttt{tomas.slama.131@gmail.com}. For more information, see my \href{http://t-slama.cz/}{personal website}.

\subsection{Acknowledgements}
I would like to thank the following people for their help in making this project a reality:
\begin{itemize}
\item \textbf{Kateřina Sulková} for being loving, supportive, and especially helpful in writing the SOČ paper.
\item \textbf{Matěj Halama} (\textit{matejhalama.cz}) for an insightful consultation about the website logo.
\item \textbf{Jan Hladík} for help in designing the logo of the website.
\item \textbf{VCT} for kindly lending me a VEX EDR kit.
\end{itemize}


\end{document}