author,category,content,published_date,slug,summary,title,url ryan,technology,"Over the long holiday weekend I had the opportunity to play around a bit with some of my Raspberry Pi scripts and try to do some fine tuning. I mostly failed in getting anything to run better, but I did discover that not having my code in version control was a bad idea. (Duh) I spent the better part of an hour trying to find a script that I had accidentally deleted somewhere in my blog. Turns out it was (mostly) there, but it didn’t ‘feel’ right … though I’m not sure why. I was able to restore the file from my blog archive, but I decided that was a dumb way to live and given that 1. I use version control at work (and have for the last 15 years) 2. I’ve used it for other personal projects However, I’ve only ever used a GUI version of either subversion (at work) or GitHub (for personal projects via PyCharm). I’ve never used it from the command line. And so, with a bit of time on my hands I dove in to see what needed to be done. Turns out, not much. I used this [GitHub](https://help.github.com/articles/adding-an-existing-project-to- github-using-the-command-line/) resource to get me what I needed. Only a couple of commands and I was in business. The problem is that I have a terrible memory and this isn’t something I’m going to do very often. So, I decided to write a bash script to encapsulate all of the commands and help me out a bit. The script looks like this: echo ""Enter your commit message:"" read commit_msg git commit -m ""$commit_msg"" git remote add origin path/to/repository git remote -v git push -u origin master git add $1 echo ”enter your commit message:” read commit_msg git commit -m ”$commit_msg” git push I just recently learned about user input in bash scripts and was really excited about the opportunity to be able to use it. Turns out it didn’t take long to try it out! (God I love learning things!) What the script does is commits the files that have been changed (all of them), adds it to the origin on the GitHub repo that has been specified, prints verbose logging to the screen (so I can tell what I’ve messed up if it happens) and then pushes the changes to the master. This script doesn’t allow you to specify what files to commit, nor does it allow for branching and tagging … but I don’t need those (yet). I added this script to 3 of my projects, each of which can be found in the following GitHub Repos: * [rpicamera-hummingbird](https://github.com/ryancheley/rpicamera-hummingbird) * [rpi-dodgers](https://github.com/ryancheley/rpi-dodgers) * [rpi-kings](https://github.com/ryancheley/rpi-kings) I had to make the commit.sh executable (with `chmod +x commit.sh`) but other than that it’s basically plug and play. ## Addendum I made a change to my Kings script tonight (Nov 27) and it wouldn’t get pushed to git. After a bit of Googling and playing around, I determined that the original script would only push changes to an empty repo ... not one with stuff, like I had. Changes made to the post (and the GitHub repo!) ",2018-11-25,adding-my-raspberry-pi-project-code-to-github,"Over the long holiday weekend I had the opportunity to play around a bit with some of my Raspberry Pi scripts and try to do some fine tuning. 
I mostly failed in getting anything to run better, but I did discover that not having my code in version control was … ",Adding my Raspberry Pi Project code to GitHub,https://www.ryancheley.com/2018/11/25/adding-my-raspberry-pi-project-code-to-github/ ryan,technology,"Last summer I migrated my blog from [Wordpress](https://wordpress.com) to [Pelican](https://getpelican.com). I did this for a couple of reasons (see my post [here](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from- wordpress/)), but one thing that I was a bit worried about when I migrated was that Pelican's offering for site search didn't look promising. There was an outdated plugin called [tipue-search](https://github.com/pelican- plugins/tipue-search) but when I was looking at it I could tell it was on it's last legs. I thought about it, and since my blag isn't super high trafficked AND you can use google to search a specific site, I could wait a bit and see what options came up. After waiting a few months, I decided it would be interesting to see if I could write a SQLite utility to get the data from my blog, add it to a SQLite database and then use [datasette](https://datasette.io) to serve it up. I wrote the beginning scaffolding for it last August in a utility called [pelican-to-sqlite](https://pypi.org/project/pelican-to-sqlite/0.1/), but I ran into several technical issues I just couldn't overcome. I thought about giving up, but sometimes you just need to take a step away from a thing, right? After the first of the year I decided to revisit my idea, but first looked to see if there was anything new for Pelican search. I found a tool plugin called [search](https://github.com/pelican-plugins/search) that was released last November and is actively being developed, but as I read through the documentation there was just **A LOT** of stuff: * stork * requirements for the structure of your page html * static asset hosting * deployment requires updating your `nginx` settings These all looked a bit scary to me, and since I've done some work using [datasette](https://datasette.io) I thought I'd revisit my initial idea. ## My First Attempt As I mentioned above, I wrote the beginning scaffolding late last summer. In my first attempt I tried to use a few tools to read the `md` files and parse their `yaml` structure and it just didn't work out. I also realized that `Pelican` can have [reStructured Text](https://www.sphinx- doc.org/en/master/usage/restructuredtext/basics.html) and that any attempt to parse just the `md` file would never work for those file types. ## My Second Attempt ### The Plugin During the holiday I thought a bit about approaching the problem from a different perspective. My initial idea was to try and write a `datasette` style package to read the data from `pelican`. I decided instead to see if I could write a `pelican` plugin to get the data and then add it to a SQLite database. It turns out, I can, and it's not that hard. Pelican uses `signals` to make plugin in creation a pretty easy thing. I read a [post](https://blog.geographer.fr/pelican-plugins) and the [documentation](https://docs.getpelican.com/en/latest/plugins.html) and was able to start my effort to refactor `pelican-to-sqlite`. From [The missing Pelican plugins guide](https://blog.geographer.fr/pelican- plugins) I saw lots of different options, but realized that the signal `article_generator_write_article` is what I needed to get the article content that I needed. 
I then also used `sqlite_utils` to insert the data into a database table. def save_items(record: dict, table: str, db: sqlite_utils.Database) -> None: # pragma: no cover db[table].insert(record, pk=""slug"", alter=True, replace=True) Below is the method I wrote to take the content and turn it into a dictionary which can be used in the `save_items` method above. def create_record(content) -> dict: record = {} author = content.author.name category = content.category.name post_content = html2text.html2text(content.content) published_date = content.date.strftime(""%Y-%m-%d"") slug = content.slug summary = html2text.html2text(content.summary) title = content.title url = ""https://www.ryancheley.com/"" + content.url status = content.status if status == ""published"": record = { ""author"": author, ""category"": category, ""content"": post_content, ""published_date"": published_date, ""slug"": slug, ""summary"": summary, ""title"": title, ""url"": url, } return record Putting these together I get a method used by the Pelican Plugin system that will generate the data I need for the site AND insert it into a SQLite database def run(_, content): record = create_record(content) save_items(record, ""content"", db) def register(): signals.article_generator_write_article.connect(run) ### The html template update I use a custom implementation of [Smashing Magazine](https://www.smashingmagazine.com/2009/08/designing-a-html-5-layout- from-scratch/). This allows me to do some edits, though I mostly keep it pretty stock. However, this allowed me to make a small edit to the `base.html` template to include a search form. In order to add the search form I added the following code to `base.html` below the `nav` tag:
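A stripped-down version of that form looks something like this (treat it as a sketch: the exact markup isn't reproduced here, and the Vercel URL is an assumption based on the project name used below; the important bits are the `action` pointing at the `article_search` saved query and the text input named `text`):

    <form action="https://search-ryancheley.vercel.app/pelican/article_search" method="get">
        <input type="search" name="text" placeholder="Search the site">
        <button type="submit">Search</button>
    </form>
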
### Putting it all together with datasette and Vercel Here's where the **magic** starts. Publishing data to Vercel with `datasette` is extremely easy with the `datasette` plugin [`datasette-publish- vercel`](https://pypi.org/project/datasette-publish-vercel/). You do need to have the [Vercel cli installed](https://vercel.com/cli), but once you do, the steps for publishing your SQLite database is really well explained in the `datasette-publish-vercel` [documentation](https://github.com/simonw/datasette-publish- vercel/blob/main/README.md). One final step to do was to add a `MAKE` command so I could just type a quick command which would create my content, generate the SQLite database AND publish the SQLite database to Vercel. I added the below to my `Makefile`: vercel: { \ echo ""Generate content and database""; \ make html; \ echo ""Content generation complete""; \ echo ""Publish data to vercel""; \ datasette publish vercel pelican.db --project=search-ryancheley --metadata metadata.json; \ echo ""Publishing complete""; \ } The line datasette publish vercel pelican.db --project=search-ryancheley --metadata metadata.json; \ has an extra flag passed to it (`--metadata`) which allows me to use `metadata.json` to create a saved query which I call `article_search`. The contents of that saved query are: select summary as 'Summary', url as 'URL', published_date as 'Published Data' from content where content like '%' || :text || '%' order by published_date This is what allows the `action` in the `form` above to have a URL to link to in `datasette` and return data! With just a few tweaks I'm able to include a search tool, powered by datasette for my pelican blog. Needless to say, I'm pretty pumped. ## Next Steps There are still a few things to do: 1. separate search form html file (for my site) 2. formatting `datasette` to match site (for my vercel powered instance of `datasette`) 3. update the README for `pelican-to-sqlite` package to better explain how to fully implement 4. Get `pelican-to-sqlite` added to the [pelican-plugins page](https://github.com/pelican-plugins/) ",2022-01-16,adding-search-to-my-pelican-blog-with-datasette,"Last summer I migrated my blog from [Wordpress](https://wordpress.com) to [Pelican](https://getpelican.com). I did this for a couple of reasons (see my post [here](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from- wordpress/)), but one thing that I was a bit worried about when I migrated was that Pelican's offering for site search didn't look promising. There was an outdated plugin … ",Adding Search to My Pelican Blog with Datasette,https://www.ryancheley.com/2022/01/16/adding-search-to-my-pelican-blog-with-datasette/ ryan,technology,"Nothing can ever really be considered **done** when you're talking about programming, right? I decided to try and add images to the [python script I wrote last week](https://github.com/miloardot/python- files/commit/e603eb863dbba169938b63df3fa82263df942984) and was able to do it, with not too much hassel. The first thing I decided to do was to update the code on `pythonista` on my iPad Pro and verify that it would run. It took some doing (mostly because I _forgot_ that the attributes in an `img` tag included what I needed ... initially I was trying to programmatically get the name of the person from the image file itelf using [regular expressions](https://en.wikipedia.org/wiki/Regular_expression) ... it didn't work out well). 
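The fix was to read the name straight off the `img` tag instead. Roughly, the idea looks like this (a sketch only, using BeautifulSoup for illustration; the attribute names are inferred from the markup shown in the table below, not copied from the actual script):

    # Sketch: grab the image URL and the person's name from the <img> tag's
    # own attributes rather than regexing the name back out of the file name.
    from bs4 import BeautifulSoup

    html = '<img src="https://www.graphtek.com/user_images/Team/Mike_Cheley.png" title="Mike Cheley">'
    soup = BeautifulSoup(html, 'html.parser')

    for img in soup.find_all('img'):
        name = img.get('title')      # e.g. "Mike Cheley"
        image_url = img.get('src')   # the full image URL
        print(f'![alt text]({image_url} "{name}")')
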
Once that was done I branched the `master` on GitHub into a `development` branch and copied the changes there. Once that was done I performed a **pull request** on the macOS GitHub Desktop Application. Finally, I used the macOS GitHub app to merge my **pull request** from `development` into `master` and now have the changes. The updated script will now also get the image data to display into the multi markdown table: | Name | Title | Image | | --- | --- | --- | |Mike Cheley|CEO/Creative Director|![alt text](https://www.graphtek.com/user_images/Team/Mike_Cheley.png ""Mike Cheley"")| |Ozzy|Official Greeter|![alt text](https://www.graphtek.com/user_images/Team/Ozzy.png ""Ozzy"")| |Jay Sant|Vice President|![alt text](https://www.graphtek.com/user_images/Team/Jay_Sant.png ""Jay Sant"")| |Shawn Isaac|Vice President|![alt text](https://www.graphtek.com/user_images/Team/Shawn_Isaac.png ""Shawn Isaac"")| |Jason Gurzi|SEM Specialist|![alt text](https://www.graphtek.com/user_images/Team/Jason_Gurzi.png ""Jason Gurzi"")| |Yvonne Valles|Director of First Impressions|![alt text](https://www.graphtek.com/user_images/Team/Yvonne_Valles.png ""Yvonne Valles"")| |Ed Lowell|Senior Designer|![alt text](https://www.graphtek.com/user_images/Team/Ed_Lowell.png ""Ed Lowell"")| |Paul Hasas|User Interface Designer|![alt text](https://www.graphtek.com/user_images/Team/Paul_Hasas.png ""Paul Hasas"")| |Alan Schmidt|Senior Web Developer|![alt text](https://www.graphtek.com/user_images/Team/Alan_Schmidt.png ""Alan Schmidt"")| Which gets displayed as this: Name Title Image * * * Mike Cheley CEO/Creative Director ![alt text](https://www.graphtek.com/user_images/Team/Mike_Cheley.png) Ozzy Official Greeter ![alt text](https://www.graphtek.com/user_images/Team/Ozzy.png) Jay Sant Vice President ![alt text](https://www.graphtek.com/user_images/Team/Jay_Sant.png) Shawn Isaac Vice President ![alt text](https://www.graphtek.com/user_images/Team/Shawn_Isaac.png) Jason Gurzi SEM Specialist ![alt text](https://www.graphtek.com/user_images/Team/Jason_Gurzi.png) Yvonne Valles Director of First Impressions ![alt text](https://www.graphtek.com/user_images/Team/Yvonne_Valles.png) Ed Lowell Senior Designer ![alt text](https://www.graphtek.com/user_images/Team/Ed_Lowell.png) Paul Hasas User Interface Designer ![alt text](https://www.graphtek.com/user_images/Team/Paul_Hasas.png) Alan Schmidt Senior Web Developer ![alt text](https://www.graphtek.com/user_images/Team/Alan_Schmidt.png) ",2016-10-22,an-update-to-my-first-python-script,"Nothing can ever really be considered **done** when you're talking about programming, right? I decided to try and add images to the [python script I wrote last week](https://github.com/miloardot/python- files/commit/e603eb863dbba169938b63df3fa82263df942984) and was able to do it, with not too much hassel. The first thing I decided to do was to update the … ",An Update to my first Python Script,https://www.ryancheley.com/2016/10/22/an-update-to-my-first-python-script/ ryan,technology,"We got everything set up, and now we want to automate the deployment. Why would we want to do this you ask? Let’s say that you’ve decided that you need to set up a test version of your site (what some might call UAT) on a new server (at some point I’ll write something up about about multiple Django Sites on the same server and part of this will still apply then). How can you do it? Well you’ll want to write yourself some scripts! I have a mix of Python and Shell scripts set up to do this. 
They are a bit piece meal, but they also allow me to run specific parts of the process without having to try and execute a script with ‘commented’ out pieces. **Python Scripts** create_server.py destroy_droplet.py **Shell Scripts** copy_for_deploy.sh create_db.sh create_server.sh deploy.sh deploy_env_variables.sh install-code.sh setup-server.sh setup_nginx.sh setup_ssl.sh super.sh upload-code.sh The Python script `create_server.py` looks like this: # create_server.py import requests import os from collections import namedtuple from operator import attrgetter from time import sleep Server = namedtuple('Server', 'created ip_address name') doat = os.environ['DIGITAL_OCEAN_ACCESS_TOKEN'] # Create Droplet headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {doat}', } data = print('>>> Creating Server') requests.post('https://api.digitalocean.com/v2/droplets', headers=headers, data=data) print('>>> Server Created') print('>>> Waiting for Server Stand up') sleep(90) print('>>> Getting Droplet Data') params = ( ('page', '1'), ('per_page', '10'), ) get_droplets = requests.get('https://api.digitalocean.com/v2/droplets', headers=headers, params=params) server_list = [] for d in get_droplets.json()['droplets']: server_list.append(Server(d['created_at'], d['networks']['v4'][0]['ip_address'], d['name'])) server_list = sorted(server_list, key=attrgetter('created'), reverse=True) server_ip_address = server_list[0].ip_address db_name = os.environ['DJANGO_PG_DB_NAME'] db_username = os.environ['DJANGO_PG_USER_NAME'] if server_ip_address != : print('>>> Run server setup') os.system(f'./setup-server.sh {server_ip_address} {db_name} {db_username}') print(f'>>> Server setup complete. You need to add {server_ip_address} to the ALLOWED_HOSTS section of your settings.py file ') else: print('WARNING: Running Server set up will destroy your current production server. Aborting process') Earlier I said that I liked Digital Ocean because of it’s nice API for interacting with it’s servers (i.e. Droplets). Here we start to see some. The First part of the script uses my Digital Ocean Token and some input parameters to create a Droplet via the Command Line. The `sleep(90)` allows the process to complete before I try and get the IP address. Ninety seconds is a bit longer than is needed, but I figure, better safe than sorry … I’m sure that there’s a way to call to DO and ask if the just created droplet has an IP address, but I haven’t figured it out yet. After we create the droplet AND is has an IP address, we get it to pass to the bash script `server-setup.sh`. # server-setup.sh #!/bin/bash # Create the server on Digital Ocean export SERVER=$1 # Take secret key as 2nd argument if [[ -z ""$1"" ]] then echo ""ERROR: No value set for server ip address1"" exit 1 fi echo -e ""\n>>> Setting up $SERVER"" ssh root@$SERVER /bin/bash << EOF set -e echo -e ""\n>>> Updating apt sources"" apt-get -qq update echo -e ""\n>>> Upgrading apt packages"" apt-get -qq upgrade echo -e ""\n>>> Installing apt packages"" apt-get -qq install python3 python3-pip python3-venv tree supervisor postgresql postgresql-contrib nginx echo -e ""\n>>> Create User to Run Web App"" if getent passwd burningfiddle then echo "">>> User already present"" else adduser --disabled-password --gecos """" burningfiddle echo -e ""\n>>> Add newly created user to www-data"" adduser burningfiddle www-data fi echo -e ""\n>>> Make directory for code to be deployed to"" if [[ ! 
-d ""/home/burningfiddle/BurningFiddle"" ]] then mkdir /home/burningfiddle/BurningFiddle else echo "">>> Skipping Deploy Folder creation - already present"" fi echo -e ""\n>>> Create VirtualEnv in this directory"" if [[ ! -d ""/home/burningfiddle/venv"" ]] then python3 -m venv /home/burningfiddle/venv else echo "">>> Skipping virtualenv creation - already present"" fi # I don't think i need this anymore echo "">>> Start and Enable gunicorn"" systemctl start gunicorn.socket systemctl enable gunicorn.socket EOF ./setup_nginx.sh $SERVER ./deploy_env_variables.sh $SERVER ./deploy.sh $SERVER All of that stuff we did before, logging into the server and running commands, we’re now doing via a script. What the above does is attempt to keep the server in an idempotent state (that is to say you can run it as many times as you want and you don’t get weird artifacts … if you’re a math nerd you may have heard idempotent in Linear Algebra to describe the multiplication of a matrix by itself and returning the original matrix … same idea here!) The one thing that is new here is the part ssh root@$SERVER /bin/bash << EOF ... EOF A block like that says, “take everything in between `EOF` and run it on the server I just ssh’d into using bash. At the end we run 3 shell scripts: * `setup_nginx.sh` * `deploy_env_variables.sh` * `deploy.sh` Let’s review these scripts The script `setup_nginx.sh` copies several files needed for the `nginx` service: * `gunicorn.service` * `gunicorn.sockets` * `nginx.conf` It then sets up a link between the `available-sites` and `enabled-sites` for `nginx` and finally restarts `nginx` # setup_nginx.sh export SERVER=$1 export sitename=burningfiddle scp -r ../config/gunicorn.service root@$SERVER:/etc/systemd/system/ scp -r ../config/gunicorn.socket root@$SERVER:/etc/systemd/system/ scp -r ../config/nginx.conf root@$SERVER:/etc/nginx/sites-available/$sitename ssh root@$SERVER /bin/bash << EOF echo -e "">>> Set up site to be linked in Nginx"" ln -s /etc/nginx/sites-available/$sitename /etc/nginx/sites-enabled echo -e "">>> Restart Nginx"" systemctl restart nginx echo -e "">>> Allow Nginx Full access"" ufw allow 'Nginx Full' EOF The script `deploy_env_variables.sh` copies environment variables. There are packages (and other methods) that help to manage environment variables better than this, and that is one of the enhancements I’ll be looking at. This script captures the values of various environment variables (one at a time) and then passes them through to the server. 
It then checks to see if these environment variables exist on the server and will place them in the `/etc/environment` file export SERVER=$1 DJANGO_SECRET_KEY=printenv | grep DJANGO_SECRET_KEY DJANGO_PG_PASSWORD=printenv | grep DJANGO_PG_PASSWORD DJANGO_PG_USER_NAME=printenv | grep DJANGO_PG_USER_NAME DJANGO_PG_DB_NAME=printenv | grep DJANGO_PG_DB_NAME DJANGO_SUPERUSER_PASSWORD=printenv | grep DJANGO_SUPERUSER_PASSWORD DJANGO_DEBUG=False ssh root@$SERVER /bin/bash << EOF if [[ ""\$DJANGO_SECRET_KEY"" != ""$DJANGO_SECRET_KEY"" ]] then echo ""DJANGO_SECRET_KEY=$DJANGO_SECRET_KEY"" >> /etc/environment else echo "">>> Skipping DJANGO_SECRET_KEY - already present"" fi if [[ ""\$DJANGO_PG_PASSWORD"" != ""$DJANGO_PG_PASSWORD"" ]] then echo ""DJANGO_PG_PASSWORD=$DJANGO_PG_PASSWORD"" >> /etc/environment else echo "">>> Skipping DJANGO_PG_PASSWORD - already present"" fi if [[ ""\$DJANGO_PG_USER_NAME"" != ""$DJANGO_PG_USER_NAME"" ]] then echo ""DJANGO_PG_USER_NAME=$DJANGO_PG_USER_NAME"" >> /etc/environment else echo "">>> Skipping DJANGO_PG_USER_NAME - already present"" fi if [[ ""\$DJANGO_PG_DB_NAME"" != ""$DJANGO_PG_DB_NAME"" ]] then echo ""DJANGO_PG_DB_NAME=$DJANGO_PG_DB_NAME"" >> /etc/environment else echo "">>> Skipping DJANGO_PG_DB_NAME - already present"" fi if [[ ""\$DJANGO_DEBUG"" != ""$DJANGO_DEBUG"" ]] then echo ""DJANGO_DEBUG=$DJANGO_DEBUG"" >> /etc/environment else echo "">>> Skipping DJANGO_DEBUG - already present"" fi EOF The `deploy.sh` calls two scripts itself: # deploy.sh #!/bin/bash set -e # Deploy Django project. export SERVER=$1 #./scripts/backup-database.sh ./upload-code.sh ./install-code.sh The final two scripts! The `upload-code.sh` script uploads the files to the `deploy` folder of the server while the `install-code.sh` script move all of the files to where then need to be on the server and restart any services. # upload-code.sh #!/bin/bash set -e echo -e ""\n>>> Copying Django project files to server."" if [[ -z ""$SERVER"" ]] then echo ""ERROR: No value set for SERVER."" exit 1 fi echo -e ""\n>>> Preparing scripts locally."" rm -rf ../../deploy/* rsync -rv --exclude 'htmlcov' --exclude 'venv' --exclude '*__pycache__*' --exclude '*staticfiles*' --exclude '*.pyc' ../../BurningFiddle/* ../../deploy echo -e ""\n>>> Copying files to the server."" ssh root@$SERVER ""rm -rf /root/deploy/"" scp -r ../../deploy root@$SERVER:/root/ echo -e ""\n>>> Finished copying Django project files to server."" And finally, # install-code.sh #!/bin/bash # Install Django app on server. 
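# Summary of the steps below: verify SERVER is set, then on the server
# activate the virtualenv, swap in the newly copied project files, install
# the Python requirements, run migrations, create the superuser, load the
# pages fixture, collect static files, and restart gunicorn.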
set -e echo -e ""\n>>> Installing Django project on server."" if [[ -z ""$SERVER"" ]] then echo ""ERROR: No value set for SERVER."" exit 1 fi echo $SERVER ssh root@$SERVER /bin/bash << EOF set -e echo -e ""\n>>> Activate the Virtual Environment"" source /home/burningfiddle/venv/bin/activate cd /home/burningfiddle/ echo -e ""\n>>> Deleting old files"" rm -rf /home/burningfiddle/BurningFiddle echo -e ""\n>>> Copying new files"" cp -r /root/deploy/ /home/burningfiddle/BurningFiddle echo -e ""\n>>> Installing Python packages"" pip install -r /home/burningfiddle/BurningFiddle/requirements.txt echo -e ""\n>>> Running Django migrations"" python /home/burningfiddle/BurningFiddle/manage.py migrate echo -e ""\n>>> Creating Superuser"" python /home/burningfiddle/BurningFiddle/manage.py createsuperuser --noinput --username bfadmin --email rcheley@gmail.com || true echo -e ""\n>>> Load Initial Data"" python /home/burningfiddle/BurningFiddle/manage.py loaddata /home/burningfiddle/BurningFiddle/fixtures/pages.json echo -e ""\n>>> Collecting static files"" python /home/burningfiddle/BurningFiddle/manage.py collectstatic echo -e ""\n>>> Reloading Gunicorn"" systemctl daemon-reload systemctl restart gunicorn EOF echo -e ""\n>>> Finished installing Django project on server."" ",2021-02-21,automating-the-deployment,"We got everything set up, and now we want to automate the deployment. Why would we want to do this you ask? Let’s say that you’ve decided that you need to set up a test version of your site (what some might call UAT) on a new server … ",Automating the deployment,https://www.ryancheley.com/2021/02/21/automating-the-deployment/ ryan,technology,"Several weeks ago in [Cronjob Redux](/cronjob-redux.html) I wrote that I had _finally_ gotten Cron to automate the entire process of compiling the `h264` files into an `mp4` and uploading it to [YouTube](https://www.youtube.com). I hadn’t. And it took the better part of the last 2 weeks to figure out what the heck was going on. Part of what I wrote before was correct. I wasn’t able to read the `client_secrets.json` file and that was leading to an error. I was _not_ correct on the creation of the `create_mp4.sh` though. The reason I got it to run automatically that night was because I had, in my testing, created the `create_mp4.sh` and when cron ran my `run_script.sh` it was able to use what was already there. The next night when it ran, the `create_mp4.sh` was already there, but the `h264` files that were referenced in it weren’t. This lead to no video being uploaded and me being confused. The issue was that cron was unable to run the part of the script that generates the script to create the `mp4` file. I’m close to having a fix for that, but for now I did the most inelegant thing possible. I broke up the script in cron so it looks like this: 00 06 * * * /home/pi/Documents/python_projects/cleanup.sh 10 19 * * * /home/pi/Documents/python_projects/create_script_01.sh 11 19 * * * /home/pi/Documents/python_projects/create_script_02.sh >> $HOME/Documents/python_projects/create_mp4.sh 2>&1 12 19 * * * /home/pi/Documents/python_projects/create_script_03.sh 13 19 * * * /home/pi/Documents/python_projects/run_script.sh At 6am every morning the `cleanup.sh` runs and removes the `h264` files, the `mp4` file and the `create_mp4.sh` script At 7:10pm the ‘[header](https://gist.github.com/ryancheley/5b11cc15160f332811a3b3d04edf3780)’ for the `create_mp4.sh` runs. 
At 7:11pm the ‘[body](https://gist.github.com/ryancheley/9e502a9f1ed94e29c4d684fa9a8c035a)’ for `create_mp4.sh` runs. At 7:12pm the ‘[footer](https://gist.github.com/ryancheley/3c91a4b27094c365b121a9dc694c3486)’ for `create_mp4.sh` runs. Finally at 7:13pm the `run_script.sh` compiles the `h264` files into an `mp4` and uploads it to YouTube. Last night while I was at a School Board meeting the whole process ran on it’s own. I was super pumped when I checked my YouTube channel and saw that the May 1 hummingbird video was there and I didn’t have to do anything. ",2018-05-02,automating-the-hummingbird-video-upload-to-youtube-or-how-i-finally-got-cron-to-do-what-i-needed-it-to-do-but-in-the-ugliest-way-possible,"Several weeks ago in [Cronjob Redux](/cronjob-redux.html) I wrote that I had _finally_ gotten Cron to automate the entire process of compiling the `h264` files into an `mp4` and uploading it to [YouTube](https://www.youtube.com). I hadn’t. And it took the better part of the last 2 weeks to figure out what … ",Automating the Hummingbird Video Upload to YouTube or How I finally got Cron to do what I needed it to do but in the ugliest way possible,https://www.ryancheley.com/2018/05/02/automating-the-hummingbird-video-upload-to-youtube-or-how-i-finally-got-cron-to-do-what-i-needed-it-to-do-but-in-the-ugliest-way-possible/ ryan,technology,"From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/ArchiveIndexView/) `ArchiveIndexView` > > Top-level archive of date-based items. ## Attributes There are 20 attributes that can be set for the `ArchiveIndexView` but most of them are based on ancestral Classes of the CBV so we won’t be going into them in Detail. ### DateMixin Attributes * allow_future: Defaults to False. If set to True you can show items that have dates that are in the future where the future is anything after the current date/time on the server. * date_field: the field that the view will use to filter the date on. If this is not set an error will be generated * uses_datetime_field: Convert a date into a datetime when the date field is a DateTimeField. When time zone support is enabled, `date` is assumed to be in the current time zone, so that displayed items are consistent with the URL. ### BaseDateListView Attributes * allow_empty: Defaults to `False`. This means that if there is no data a `404` error will be returned with the message > > `No __str__ Available` where ‘`__str__`’ is the display of your model * date_list_period: This attribute allows you to break down by a specific period of time (years, months, days, etc.) and group your date driven items by the period specified. See below for implementation For `year` views.py date_list_period='year' urls.py Nothing special needs to be done \.html {% block content %}
{% for date in date_list %}
  {{ date.year }}
    {% for p in person %}
      {% if date.year == p.post_date.year %}
        {{ p.post_date }}: {{ p.first_name }} {{ p.last_name }}
      {% endif %}
    {% endfor %}
{% endfor %}
{% endblock %} Will render: ![Rendered Archive Index View](/images/uploads/2019/11/634B59DC-6BA6-4C5F-B969-E8B924123FFA.jpeg) For `month` views.py date_list_period='month' urls.py Nothing special needs to be done \.html {% block content %}
{% for date in date_list %}
  {{ date.month }}
    {% for p in person %}
      {% if date.month == p.post_date.month %}
        {{ p.post_date }}: {{ p.first_name }} {{ p.last_name }}
      {% endif %}
    {% endfor %}
{% endfor %}
{% endblock %} Will render: ![BaseArchiveIndexView](/images/uploads/2019/11/04B40CD4-3B85-440D-810D-4050727D6120.jpeg) ### BaseArchiveIndexView Attributes * context_object_name: Name the object used in the template. As stated before, you’re going to want to do this so you don’t hate yourself (or have other developers hate you). ## Other Attributes ### MultipleObjectMixin Attributes These attributes were all reviewed in the [ListView](/cbv-listview.html) post * model = None * ordering = None * page_kwarg = 'page' * paginate_by = None * paginate_orphans = 0 * paginator_class = \ * queryset = None ### TemplateResponseMixin Attributes This attribute was reviewed in the [ListView](/cbv-listview.html) post * content_type = None ### ContextMixin Attributes This attribute was reviewed in the [ListView](/cbv-listview.html) post * extra_context = None ### View Attributes This attribute was reviewed in the [View](/cbv-view.html) post * http_method_names = ['get', 'post', 'put', 'patch', 'delete', 'head', 'options', 'trace'] ### TemplateResponseMixin Attributes These attributes were all reviewed in the [ListView](/cbv-listview.html) post * response_class = \ * template_engine = None * template_name = None ## Diagram A visual representation of how `ArchiveIndexView` is derived can be seen here: ![ArchiveIndexView](https://yuml.me/diagram/plain;/class/%5BMultipleObjectTemplateResponseMixin%7Bbg:white%7D%5D%5E-%5BArchiveIndexView%7Bbg:green%7D%5D,%20%5BTemplateResponseMixin%7Bbg:white%7D%5D%5E-%5BMultipleObjectTemplateResponseMixin%7Bbg:white%7D%5D,%20%5BBaseArchiveIndexView%7Bbg:white%7D%5D%5E-%5BArchiveIndexView%7Bbg:green%7D%5D,%20%5BBaseDateListView%7Bbg:white%7D%5D%5E-%5BBaseArchiveIndexView%7Bbg:white%7D%5D,%20%5BMultipleObjectMixin%7Bbg:white%7D%5D%5E-%5BBaseDateListView%7Bbg:white%7D%5D,%20%5BContextMixin%7Bbg:white%7D%5D%5E-%5BMultipleObjectMixin%7Bbg:white%7D%5D,%20%5BDateMixin%7Bbg:white%7D%5D%5E-%5BBaseDateListView%7Bbg:white%7D%5D,%20%5BView%7Bbg:lightblue%7D%5D%5E-%5BBaseDateListView%7Bbg:white%7D%5D.svg) ## Conclusion With date driven data (articles, blogs, etc.) The `ArchiveIndexView` is a great CBV and super easy to implement. ",2019-11-24,cbv-archiveindexview,"From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/ArchiveIndexView/) `ArchiveIndexView` > > Top-level archive of date-based items. ## Attributes There are 20 attributes that can be set for the `ArchiveIndexView` but most of them are based on ancestral Classes of the CBV so we won’t be going into them in Detail. ### DateMixin Attributes * allow_future: Defaults to … ",CBV - ArchiveIndexView,https://www.ryancheley.com/2019/11/24/cbv-archiveindexview/ ryan,technology,"From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/BaseListView/) `BaseListView` > > A base view for displaying a list of objects. And from the [Django Docs](https://docs.djangoproject.com/en/2.2/ref/class- based-views/generic-display/#listview): > > A base view for displaying a list of objects. It is not intended to be > used directly, but rather as a parent class of the > django.views.generic.list.ListView or other views representing lists of > objects. Almost all of the functionality of `BaseListView` comes from the `MultipleObjectMixin`. Since the Django Docs specifically say don’t use this directly, I won’t go into it too much. 
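About the only time you'd reach for it is when you want list-view behavior without rendering an HTML template. A quick sketch of what that might look like (illustrative only; the `Person` model and `rango` app are borrowed from the other posts in this series):

    # Sketch: subclass BaseListView directly when you don't want a template,
    # e.g. to return the object list as JSON. BaseListView supplies the GET
    # handling and object_list context; we just provide render_to_response.
    from django.http import JsonResponse
    from django.views.generic.list import BaseListView

    from rango.models import Person


    class PersonJSONListView(BaseListView):
        queryset = Person.objects.all()

        def render_to_response(self, context, **response_kwargs):
            people = [str(p) for p in context['object_list']]
            return JsonResponse({'people': people})
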
## Diagram A visual representation of how `BaseListView` is derived can be seen here: ![BaseListView](https://yuml.me/diagram/plain;/class/%5BMultipleObjectMixin%7Bbg:white%7D%5D%5E-%5BBaseListView%7Bbg:green%7D%5D,%20%5BContextMixin%7Bbg:white%7D%5D%5E-%5BMultipleObjectMixin%7Bbg:white%7D%5D,%20%5BView%7Bbg:lightblue%7D%5D%5E-%5BBaseListView%7Bbg:green%7D%5D.svg) ## Conclusion Don’t use this. It should be subclassed into a usable view (a la `ListView`). There are many **Base** views that are ancestors for other views. I’m not going to cover any more of them going forward **UNLESS** the documentation says there’s a specific reason to. ",2019-11-17,cbv-baselistview,"From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/BaseListView/) `BaseListView` > > A base view for displaying a list of objects. And from the [Django Docs](https://docs.djangoproject.com/en/2.2/ref/class-based-views/generic-display/#listview): > > A base view for displaying a list of objects. It is not intended to be > used directly, but rather as a parent class of the > django.views.generic.list.ListView … ",CBV - BaseListView,https://www.ryancheley.com/2019/11/17/cbv-baselistview/ ryan,technology,"From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/CreateView/) `CreateView` > > View for creating a new object, with a response rendered by a template. ## Attributes Three attributes are required to get the template to render. Two we’ve seen before (`queryset` and `template_name`). The new one we haven’t seen before is the `fields` attribute. * fields: specifies what fields from the model or queryset will be displayed on the rendered template. You can set `fields` to `__all__` if you want to return all of the fields ## Example views.py queryset = Person.objects.all() fields = '__all__' template_name = 'rango/person_form.html' urls.py path('create_view/', views.myCreateView.as_view(), name='create_view'), \
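Spelled out as a full `views.py`, the view referenced by that `urls.py` line would look something like this (a sketch; the `myCreateView` name and the `rango` app are taken from the snippet above):

    # views.py -- sketch of the CreateView subclass wired up in urls.py above
    from django.views.generic.edit import CreateView

    from rango.models import Person


    class myCreateView(CreateView):
        queryset = Person.objects.all()
        fields = '__all__'
        template_name = 'rango/person_form.html'
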