author,category,content,published_date,slug,summary,title,url
ryan,technology,"Over the long holiday weekend I had the opportunity to play around a bit with
some of my Raspberry Pi scripts and try to do some fine tuning.
I mostly failed in getting anything to run better, but I did discover that not
having my code in version control was a bad idea. (Duh)
I spent the better part of an hour trying to find a script that I had
accidentally deleted somewhere in my blog. Turns out it was (mostly) there,
but it didn’t ‘feel’ right … though I’m not sure why.
I was able to restore the file from my blog archive, but I decided that was a
dumb way to live, given that:
1. I use version control at work (and have for the last 15 years)
2. I’ve used it for other personal projects
However, I’ve only ever used a GUI client, either for Subversion (at work) or
GitHub (for personal projects via PyCharm). I’ve never used it from the
command line.
And so, with a bit of time on my hands I dove in to see what needed to be
done.
Turns out, not much. I used this
[GitHub](https://help.github.com/articles/adding-an-existing-project-to-
github-using-the-command-line/) resource to get me what I needed. Only a
couple of commands and I was in business.
The problem is that I have a terrible memory and this isn’t something I’m
going to do very often. So, I decided to write a bash script to encapsulate
all of the commands and help me out a bit.
The script looks like this:
git add .
echo ""Enter your commit message:""
read commit_msg
git commit -m ""$commit_msg""
git remote add origin path/to/repository
git remote -v
git push -u origin master
Per the addendum below, the updated version of the script (for a repo that
already has content in it) is:
git add $1
echo ""Enter your commit message:""
read commit_msg
git commit -m ""$commit_msg""
git push
I just recently learned about user input in bash scripts and was really
excited about the opportunity to be able to use it. Turns out it didn’t take
long to try it out! (God I love learning things!)
What the script does is commit the files that have been changed (all of
them), add the origin for the GitHub repo that has been specified,
print verbose logging to the screen (so I can tell what I’ve messed up if it
happens), and then push the changes to master.
This script doesn’t allow you to specify what files to commit, nor does it
allow for branching and tagging … but I don’t need those (yet).
I added this script to 3 of my projects, each of which can be found in the
following GitHub Repos:
* [rpicamera-hummingbird](https://github.com/ryancheley/rpicamera-hummingbird)
* [rpi-dodgers](https://github.com/ryancheley/rpi-dodgers)
* [rpi-kings](https://github.com/ryancheley/rpi-kings)
I had to make the commit.sh executable (with `chmod +x commit.sh`) but other
than that it’s basically plug and play.
## Addendum
I made a change to my Kings script tonight (Nov 27) and it wouldn’t get pushed
to git. After a bit of Googling and playing around, I determined that the
original script would only push changes to an empty repo ... not one with
stuff, like I had. Changes made to the post (and the GitHub repo!)
",2018-11-25,adding-my-raspberry-pi-project-code-to-github,"Over the long holiday weekend I had the opportunity to play around a bit with
some of my Raspberry Pi scripts and try to do some fine tuning.
I mostly failed in getting anything to run better, but I did discover that not
having my code in version control was …
",Adding my Raspberry Pi Project code to GitHub,https://www.ryancheley.com/2018/11/25/adding-my-raspberry-pi-project-code-to-github/
ryan,technology,"Last summer I migrated my blog from [Wordpress](https://wordpress.com) to
[Pelican](https://getpelican.com). I did this for a couple of reasons (see my
post [here](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-
wordpress/)), but one thing that I was a bit worried about when I migrated was
that Pelican's offering for site search didn't look promising.
There was an outdated plugin called [tipue-search](https://github.com/pelican-plugins/tipue-search),
but when I was looking at it I could tell it was on its last legs.
I thought about it, and since my blog isn't super highly trafficked AND you can
use Google to search a specific site, I figured I could wait a bit and see what
options came up.
After waiting a few months, I decided it would be interesting to see if I
could write a SQLite utility to get the data from my blog, add it to a SQLite
database and then use [datasette](https://datasette.io) to serve it up.
I wrote the beginning scaffolding for it last August in a utility called
[pelican-to-sqlite](https://pypi.org/project/pelican-to-sqlite/0.1/), but I
ran into several technical issues I just couldn't overcome. I thought about
giving up, but sometimes you just need to take a step away from a thing,
right?
After the first of the year I decided to revisit my idea, but first looked to
see if there was anything new for Pelican search. I found a plugin called
[search](https://github.com/pelican-plugins/search) that was released last
November and is actively being developed, but as I read through the
documentation there was just **A LOT** of stuff:
* stork
* requirements for the structure of your page html
* static asset hosting
* deployment requires updating your `nginx` settings
These all looked a bit scary to me, and since I've done some work using
[datasette](https://datasette.io) I thought I'd revisit my initial idea.
## My First Attempt
As I mentioned above, I wrote the beginning scaffolding late last summer. In
my first attempt I tried to use a few tools to read the `md` files and parse
their `yaml` structure, and it just didn't work out. I also realized that
`Pelican` content can also be written in [reStructuredText](https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html),
so any attempt to
parse just the `md` files would never work for those file types.
## My Second Attempt
### The Plugin
During the holiday I thought a bit about approaching the problem from a
different perspective. My initial idea was to try and write a `datasette`
style package to read the data from `pelican`. I decided instead to see if I
could write a `pelican` plugin to get the data and then add it to a SQLite
database. It turns out, I can, and it's not that hard.
Pelican uses `signals` to make plugin creation a pretty easy thing. I read
a [post](https://blog.geographer.fr/pelican-plugins) and the
[documentation](https://docs.getpelican.com/en/latest/plugins.html) and was
able to start my effort to refactor `pelican-to-sqlite`.
From [The missing Pelican plugins guide](https://blog.geographer.fr/pelican-
plugins) I saw lots of different options, but realized that the signal
`article_generator_write_article` is what I needed to get the article content
that I needed.
I then also used `sqlite_utils` to insert the data into a database table.
def save_items(record: dict, table: str, db: sqlite_utils.Database) -> None:  # pragma: no cover
    db[table].insert(record, pk=""slug"", alter=True, replace=True)
Below is the method I wrote to take the content and turn it into a dictionary
which can be used in the `save_items` method above.
def create_record(content) -> dict:
    record = {}
    author = content.author.name
    category = content.category.name
    post_content = html2text.html2text(content.content)
    published_date = content.date.strftime(""%Y-%m-%d"")
    slug = content.slug
    summary = html2text.html2text(content.summary)
    title = content.title
    url = ""https://www.ryancheley.com/"" + content.url
    status = content.status
    if status == ""published"":
        record = {
            ""author"": author,
            ""category"": category,
            ""content"": post_content,
            ""published_date"": published_date,
            ""slug"": slug,
            ""summary"": summary,
            ""title"": title,
            ""url"": url,
        }
    return record
Putting these together I get a method used by the Pelican Plugin system that
will generate the data I need for the site AND insert it into a SQLite
database
def run(_, content):
    record = create_record(content)
    save_items(record, ""content"", db)

def register():
    signals.article_generator_write_article.connect(run)
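Pieced together, the plugin module ends up looking roughly like this. This is a
sketch rather than the package's actual source: the imports and the
module-level `db` are my guesses at how the snippets above get wired up, and
the `pelican.db` filename simply matches the publish command further down.
import html2text
import sqlite_utils
from pelican import signals

db = sqlite_utils.Database('pelican.db')

def create_record(content) -> dict:
    # only published articles get a record; drafts come back as an empty dict
    record = {}
    if content.status == 'published':
        record = {
            'author': content.author.name,
            'category': content.category.name,
            'content': html2text.html2text(content.content),
            'published_date': content.date.strftime('%Y-%m-%d'),
            'slug': content.slug,
            'summary': html2text.html2text(content.summary),
            'title': content.title,
            'url': 'https://www.ryancheley.com/' + content.url,
        }
    return record

def save_items(record: dict, table: str, db: sqlite_utils.Database) -> None:
    # upsert keyed on slug so rebuilding the site doesn't create duplicate rows
    db[table].insert(record, pk='slug', alter=True, replace=True)

def run(_, content):
    record = create_record(content)
    if record:
        save_items(record, 'content', db)

def register():
    signals.article_generator_write_article.connect(run)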
### The html template update
I use a custom theme based on [Smashing
Magazine's HTML5 layout](https://www.smashingmagazine.com/2009/08/designing-a-html-5-layout-from-scratch/).
I mostly keep it pretty stock, but it did let me make a small edit to the
`base.html` template to include a search form: a small HTML `form` just below
the `nav` tag, whose `action` points at the `datasette` instance described in
the next section.
### Putting it all together with datasette and Vercel
Here's where the **magic** starts. Publishing data to Vercel with `datasette`
is extremely easy with the `datasette` plugin [`datasette-publish-
vercel`](https://pypi.org/project/datasette-publish-vercel/).
You do need to have the [Vercel cli installed](https://vercel.com/cli), but
once you do, the steps for publishing your SQLite database are really well
explained in the `datasette-publish-vercel`
[documentation](https://github.com/simonw/datasette-publish-
vercel/blob/main/README.md).
One final step was to add a `make` command so I could just type one quick
command to create my content, generate the SQLite database, AND publish the
SQLite database to Vercel. I added the below to my `Makefile`:
vercel:
	{ \
	echo ""Generate content and database""; \
	make html; \
	echo ""Content generation complete""; \
	echo ""Publish data to vercel""; \
	datasette publish vercel pelican.db --project=search-ryancheley --metadata metadata.json; \
	echo ""Publishing complete""; \
	}
The line
datasette publish vercel pelican.db --project=search-ryancheley --metadata metadata.json; \
has an extra flag passed to it (`--metadata`) which allows me to use
`metadata.json` to create a saved query which I call `article_search`. The
contents of that saved query are:
select summary as 'Summary', url as 'URL', published_date as 'Published Data' from content where content like '%' || :text || '%' order by published_date
This is what allows the `action` in the `form` above to have a URL to link to
in `datasette` and return data!
With just a few tweaks I'm able to include a search tool, powered by
`datasette`, for my pelican blog. Needless to say, I'm pretty pumped.
## Next Steps
There are still a few things to do:
1. separate search form html file (for my site)
2. formatting `datasette` to match site (for my vercel powered instance of `datasette`)
3. update the README for `pelican-to-sqlite` package to better explain how to fully implement
4. Get `pelican-to-sqlite` added to the [pelican-plugins page](https://github.com/pelican-plugins/)
",2022-01-16,adding-search-to-my-pelican-blog-with-datasette,"Last summer I migrated my blog from [Wordpress](https://wordpress.com) to
[Pelican](https://getpelican.com). I did this for a couple of reasons (see my
post [here](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-
wordpress/)), but one thing that I was a bit worried about when I migrated was
that Pelican's offering for site search didn't look promising.
There was an outdated plugin …
",Adding Search to My Pelican Blog with Datasette,https://www.ryancheley.com/2022/01/16/adding-search-to-my-pelican-blog-with-datasette/
ryan,technology,"Nothing can ever really be considered **done** when you're talking about
programming, right?
I decided to try and add images to the [python script I wrote last
week](https://github.com/miloardot/python-
files/commit/e603eb863dbba169938b63df3fa82263df942984) and was able to do it,
with not too much hassle.
The first thing I decided to do was to update the code on `pythonista` on my
iPad Pro and verify that it would run.
It took some doing (mostly because I _forgot_ that the attributes in an `img`
tag included what I needed ... initially I was trying to programmatically get
the name of the person from the image file itself using [regular
expressions](https://en.wikipedia.org/wiki/Regular_expression) ... it didn't
work out well).
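For illustration only (the real change is in the commit linked above), pulling
a person's name straight out of an `img` tag's attributes with BeautifulSoup
looks something like this; the HTML snippet and attribute values are made up:
# hypothetical example: the img tag attributes already hold the name, no regular expressions needed
from bs4 import BeautifulSoup

html = ""<img src='/images/mike-cheley.jpg' alt='Mike Cheley' title='CEO/Creative Director'>""
soup = BeautifulSoup(html, 'html.parser')
img = soup.find('img')

print(img['alt'])    # Mike Cheley
print(img['src'])    # /images/mike-cheley.jpg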
Once that was done I branched `master` on GitHub into a `development` branch
and copied the changes there, then opened a **pull request** from the macOS
GitHub Desktop application.
Finally, I used the same app to merge my **pull request** from `development`
into `master`, and the changes are now in place.
The updated script will now also get the image data to display into the multi
markdown table:
| Name | Title | Image |
| --- | --- | --- |
|Mike Cheley|CEO/Creative Director||
|Ozzy|Official Greeter||
|Jay Sant|Vice President||
|Shawn Isaac|Vice President||
|Jason Gurzi|SEM Specialist||
|Yvonne Valles|Director of First Impressions||
|Ed Lowell|Senior Designer||
|Paul Hasas|User Interface Designer||
|Alan Schmidt|Senior Web Developer||
Which gets displayed as a rendered table with each person's name, title, and image.
",2016-10-22,an-update-to-my-first-python-script,"Nothing can ever really be considered **done** when you're talking about
programming, right?
I decided to try and add images to the [python script I wrote last
week](https://github.com/miloardot/python-
files/commit/e603eb863dbba169938b63df3fa82263df942984) and was able to do it,
with not too much hassle.
The first thing I decided to do was to update the …
",An Update to my first Python Script,https://www.ryancheley.com/2016/10/22/an-update-to-my-first-python-script/
ryan,technology,"We got everything set up, and now we want to automate the deployment.
Why would we want to do this you ask? Let’s say that you’ve decided that you
need to set up a test version of your site (what some might call UAT) on a new
server (at some point I’ll write something up about multiple Django
Sites on the same server and part of this will still apply then). How can you
do it?
Well you’ll want to write yourself some scripts!
I have a mix of Python and Shell scripts set up to do this. They are a bit
piecemeal, but they also allow me to run specific parts of the process
without having to try and execute a script with ‘commented’ out pieces.
**Python Scripts**
create_server.py
destroy_droplet.py
**Shell Scripts**
copy_for_deploy.sh
create_db.sh
create_server.sh
deploy.sh
deploy_env_variables.sh
install-code.sh
setup-server.sh
setup_nginx.sh
setup_ssl.sh
super.sh
upload-code.sh
The Python script `create_server.py` looks like this:
# create_server.py
import requests
import os
from collections import namedtuple
from operator import attrgetter
from time import sleep

Server = namedtuple('Server', 'created ip_address name')
doat = os.environ['DIGITAL_OCEAN_ACCESS_TOKEN']
# Create Droplet
headers = {
    'Content-Type': 'application/json',
    'Authorization': f'Bearer {doat}',
}
data =
print('>>> Creating Server')
requests.post('https://api.digitalocean.com/v2/droplets', headers=headers, data=data)
print('>>> Server Created')
print('>>> Waiting for Server Stand up')
sleep(90)
print('>>> Getting Droplet Data')
params = (
    ('page', '1'),
    ('per_page', '10'),
)
get_droplets = requests.get('https://api.digitalocean.com/v2/droplets', headers=headers, params=params)
server_list = []
for d in get_droplets.json()['droplets']:
    server_list.append(Server(d['created_at'], d['networks']['v4'][0]['ip_address'], d['name']))
server_list = sorted(server_list, key=attrgetter('created'), reverse=True)
server_ip_address = server_list[0].ip_address
db_name = os.environ['DJANGO_PG_DB_NAME']
db_username = os.environ['DJANGO_PG_USER_NAME']
if server_ip_address != :
    print('>>> Run server setup')
    os.system(f'./setup-server.sh {server_ip_address} {db_name} {db_username}')
    print(f'>>> Server setup complete. You need to add {server_ip_address} to the ALLOWED_HOSTS section of your settings.py file ')
else:
    print('WARNING: Running Server set up will destroy your current production server. Aborting process')
Earlier I said that I liked Digital Ocean because of its nice API for
interacting with its servers (i.e. Droplets). Here we start to see some of that.
The first part of the script uses my Digital Ocean token and some input
parameters to create a Droplet via the command line. The `sleep(90)` gives
the process time to complete before I try to get the IP address. Ninety
seconds is a bit longer than is needed, but I figure better safe than sorry …
I’m sure that there’s a way to call DO and ask if the just-created droplet
has an IP address, but I haven’t figured it out yet.
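For the record, the API does expose this. A sketch of polling the droplet
endpoint until an IP address shows up (the droplet id comes back in the JSON
response to the create call) could replace the `sleep(90)`:
# sketch: poll DigitalOcean until the new droplet reports an IPv4 address
import os
from time import sleep
import requests

doat = os.environ['DIGITAL_OCEAN_ACCESS_TOKEN']
headers = {'Authorization': f'Bearer {doat}'}

def wait_for_ip(droplet_id, timeout=300):
    waited = 0
    while waited < timeout:
        droplet = requests.get(
            f'https://api.digitalocean.com/v2/droplets/{droplet_id}',
            headers=headers,
        ).json()['droplet']
        v4 = droplet['networks']['v4']
        if droplet['status'] == 'active' and v4:
            return v4[0]['ip_address']
        sleep(10)
        waited += 10
    raise RuntimeError('Droplet never reported an IP address')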
After we create the droplet AND it has an IP address, we pass it to the
bash script `setup-server.sh`.
# setup-server.sh
#!/bin/bash
# Create the server on Digital Ocean
export SERVER=$1
# Take secret key as 2nd argument
if [[ -z ""$1"" ]]
then
    echo ""ERROR: No value set for server ip address""
    exit 1
fi
echo -e ""\n>>> Setting up $SERVER""
ssh root@$SERVER /bin/bash << EOF
set -e
echo -e ""\n>>> Updating apt sources""
apt-get -qq update
echo -e ""\n>>> Upgrading apt packages""
apt-get -qq upgrade
echo -e ""\n>>> Installing apt packages""
apt-get -qq install python3 python3-pip python3-venv tree supervisor postgresql postgresql-contrib nginx
echo -e ""\n>>> Create User to Run Web App""
if getent passwd burningfiddle
then
echo "">>> User already present""
else
adduser --disabled-password --gecos """" burningfiddle
echo -e ""\n>>> Add newly created user to www-data""
adduser burningfiddle www-data
fi
echo -e ""\n>>> Make directory for code to be deployed to""
if [[ ! -d ""/home/burningfiddle/BurningFiddle"" ]]
then
mkdir /home/burningfiddle/BurningFiddle
else
echo "">>> Skipping Deploy Folder creation - already present""
fi
echo -e ""\n>>> Create VirtualEnv in this directory""
if [[ ! -d ""/home/burningfiddle/venv"" ]]
then
python3 -m venv /home/burningfiddle/venv
else
echo "">>> Skipping virtualenv creation - already present""
fi
# I don't think i need this anymore
echo "">>> Start and Enable gunicorn""
systemctl start gunicorn.socket
systemctl enable gunicorn.socket
EOF
./setup_nginx.sh $SERVER
./deploy_env_variables.sh $SERVER
./deploy.sh $SERVER
All of that stuff we did before, logging into the server and running commands,
we’re now doing via a script. What the above does is attempt to keep the
server in an idempotent state (that is to say you can run it as many times as
you want and you don’t get weird artifacts … if you’re a math nerd you may
have heard idempotent in Linear Algebra to describe the multiplication of a
matrix by itself and returning the original matrix … same idea here!)
The one thing that is new here is the part
ssh root@$SERVER /bin/bash << EOF
...
EOF
A block like that says, “take everything between the `EOF` markers and run it
on the server I just ssh’d into, using bash.”
At the end we run 3 shell scripts:
* `setup_nginx.sh`
* `deploy_env_variables.sh`
* `deploy.sh`
Let’s review these scripts
The script `setup_nginx.sh` copies several files needed for the `nginx`
service:
* `gunicorn.service`
* `gunicorn.sockets`
* `nginx.conf`
It then sets up a link between `sites-available` and `sites-enabled` for
`nginx` and finally restarts `nginx`.
# setup_nginx.sh
export SERVER=$1
export sitename=burningfiddle
scp -r ../config/gunicorn.service root@$SERVER:/etc/systemd/system/
scp -r ../config/gunicorn.socket root@$SERVER:/etc/systemd/system/
scp -r ../config/nginx.conf root@$SERVER:/etc/nginx/sites-available/$sitename
ssh root@$SERVER /bin/bash << EOF
echo -e "">>> Set up site to be linked in Nginx""
ln -s /etc/nginx/sites-available/$sitename /etc/nginx/sites-enabled
echo -e "">>> Restart Nginx""
systemctl restart nginx
echo -e "">>> Allow Nginx Full access""
ufw allow 'Nginx Full'
EOF
The script `deploy_env_variables.sh` copies environment variables. There are
packages (and other methods) that help to manage environment variables better
than this, and that is one of the enhancements I’ll be looking at.
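As one example of what that enhancement could look like (not something this
project does today), [python-dotenv](https://pypi.org/project/python-dotenv/)
would let the Django settings read a `.env` file instead of relying on
`/etc/environment`:
# sketch: settings.py reading from a .env file with python-dotenv
import os
from dotenv import load_dotenv

load_dotenv()  # pulls KEY=value pairs from a .env file into os.environ

SECRET_KEY = os.environ['DJANGO_SECRET_KEY']
DEBUG = os.environ.get('DJANGO_DEBUG', 'False') == 'True'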
This script captures the values of various environment variables (one at a
time) and then passes them through to the server. It then checks to see if
these environment variables already exist on the server and, if they don’t,
places them in the `/etc/environment` file.
export SERVER=$1
# capture the local values (via command substitution) so they can be passed to the server below
DJANGO_SECRET_KEY=$(printenv DJANGO_SECRET_KEY)
DJANGO_PG_PASSWORD=$(printenv DJANGO_PG_PASSWORD)
DJANGO_PG_USER_NAME=$(printenv DJANGO_PG_USER_NAME)
DJANGO_PG_DB_NAME=$(printenv DJANGO_PG_DB_NAME)
DJANGO_SUPERUSER_PASSWORD=$(printenv DJANGO_SUPERUSER_PASSWORD)
DJANGO_DEBUG=False
ssh root@$SERVER /bin/bash << EOF
if [[ ""\$DJANGO_SECRET_KEY"" != ""$DJANGO_SECRET_KEY"" ]]
then
echo ""DJANGO_SECRET_KEY=$DJANGO_SECRET_KEY"" >> /etc/environment
else
echo "">>> Skipping DJANGO_SECRET_KEY - already present""
fi
if [[ ""\$DJANGO_PG_PASSWORD"" != ""$DJANGO_PG_PASSWORD"" ]]
then
echo ""DJANGO_PG_PASSWORD=$DJANGO_PG_PASSWORD"" >> /etc/environment
else
echo "">>> Skipping DJANGO_PG_PASSWORD - already present""
fi
if [[ ""\$DJANGO_PG_USER_NAME"" != ""$DJANGO_PG_USER_NAME"" ]]
then
echo ""DJANGO_PG_USER_NAME=$DJANGO_PG_USER_NAME"" >> /etc/environment
else
echo "">>> Skipping DJANGO_PG_USER_NAME - already present""
fi
if [[ ""\$DJANGO_PG_DB_NAME"" != ""$DJANGO_PG_DB_NAME"" ]]
then
echo ""DJANGO_PG_DB_NAME=$DJANGO_PG_DB_NAME"" >> /etc/environment
else
echo "">>> Skipping DJANGO_PG_DB_NAME - already present""
fi
if [[ ""\$DJANGO_DEBUG"" != ""$DJANGO_DEBUG"" ]]
then
echo ""DJANGO_DEBUG=$DJANGO_DEBUG"" >> /etc/environment
else
echo "">>> Skipping DJANGO_DEBUG - already present""
fi
EOF
The `deploy.sh` calls two scripts itself:
# deploy.sh
#!/bin/bash
set -e
# Deploy Django project.
export SERVER=$1
#./scripts/backup-database.sh
./upload-code.sh
./install-code.sh
The final two scripts!
The `upload-code.sh` script uploads the files to the `deploy` folder of the
server, while the `install-code.sh` script moves all of the files to where they
need to be on the server and restarts any services.
# upload-code.sh
#!/bin/bash
set -e
echo -e ""\n>>> Copying Django project files to server.""
if [[ -z ""$SERVER"" ]]
then
echo ""ERROR: No value set for SERVER.""
exit 1
fi
echo -e ""\n>>> Preparing scripts locally.""
rm -rf ../../deploy/*
rsync -rv --exclude 'htmlcov' --exclude 'venv' --exclude '*__pycache__*' --exclude '*staticfiles*' --exclude '*.pyc' ../../BurningFiddle/* ../../deploy
echo -e ""\n>>> Copying files to the server.""
ssh root@$SERVER ""rm -rf /root/deploy/""
scp -r ../../deploy root@$SERVER:/root/
echo -e ""\n>>> Finished copying Django project files to server.""
And finally,
# install-code.sh
#!/bin/bash
# Install Django app on server.
set -e
echo -e ""\n>>> Installing Django project on server.""
if [[ -z ""$SERVER"" ]]
then
echo ""ERROR: No value set for SERVER.""
exit 1
fi
echo $SERVER
ssh root@$SERVER /bin/bash << EOF
set -e
echo -e ""\n>>> Activate the Virtual Environment""
source /home/burningfiddle/venv/bin/activate
cd /home/burningfiddle/
echo -e ""\n>>> Deleting old files""
rm -rf /home/burningfiddle/BurningFiddle
echo -e ""\n>>> Copying new files""
cp -r /root/deploy/ /home/burningfiddle/BurningFiddle
echo -e ""\n>>> Installing Python packages""
pip install -r /home/burningfiddle/BurningFiddle/requirements.txt
echo -e ""\n>>> Running Django migrations""
python /home/burningfiddle/BurningFiddle/manage.py migrate
echo -e ""\n>>> Creating Superuser""
python /home/burningfiddle/BurningFiddle/manage.py createsuperuser --noinput --username bfadmin --email rcheley@gmail.com || true
echo -e ""\n>>> Load Initial Data""
python /home/burningfiddle/BurningFiddle/manage.py loaddata /home/burningfiddle/BurningFiddle/fixtures/pages.json
echo -e ""\n>>> Collecting static files""
python /home/burningfiddle/BurningFiddle/manage.py collectstatic
echo -e ""\n>>> Reloading Gunicorn""
systemctl daemon-reload
systemctl restart gunicorn
EOF
echo -e ""\n>>> Finished installing Django project on server.""
",2021-02-21,automating-the-deployment,"We got everything set up, and now we want to automate the deployment.
Why would we want to do this you ask? Let’s say that you’ve decided that you
need to set up a test version of your site (what some might call UAT) on a new
server …
",Automating the deployment,https://www.ryancheley.com/2021/02/21/automating-the-deployment/
ryan,technology,"Several weeks ago in [Cronjob Redux](/cronjob-redux.html) I wrote that I had
_finally_ gotten Cron to automate the entire process of compiling the `h264`
files into an `mp4` and uploading it to [YouTube](https://www.youtube.com).
I hadn’t. And it took the better part of the last 2 weeks to figure out what
the heck was going on.
Part of what I wrote before was correct. I wasn’t able to read the
`client_secrets.json` file and that was leading to an error.
I was _not_ correct on the creation of the `create_mp4.sh` though.
The reason I got it to run automatically that night was because I had, in my
testing, created the `create_mp4.sh` and when cron ran my `run_script.sh` it
was able to use what was already there.
The next night when it ran, the `create_mp4.sh` was already there, but the
`h264` files that were referenced in it weren’t. This lead to no video being
uploaded and me being confused.
The issue was that cron was unable to run the part of the script that
generates the script to create the `mp4` file.
I’m close to having a fix for that, but for now I did the most inelegant thing
possible. I broke up the script in cron so it looks like this:
00 06 * * * /home/pi/Documents/python_projects/cleanup.sh
10 19 * * * /home/pi/Documents/python_projects/create_script_01.sh
11 19 * * * /home/pi/Documents/python_projects/create_script_02.sh >> $HOME/Documents/python_projects/create_mp4.sh 2>&1
12 19 * * * /home/pi/Documents/python_projects/create_script_03.sh
13 19 * * * /home/pi/Documents/python_projects/run_script.sh
At 6am every morning the `cleanup.sh` runs and removes the `h264` files, the
`mp4` file and the `create_mp4.sh` script
At 7:10pm the
‘[header](https://gist.github.com/ryancheley/5b11cc15160f332811a3b3d04edf3780)’
for the `create_mp4.sh` runs. At 7:11pm the
‘[body](https://gist.github.com/ryancheley/9e502a9f1ed94e29c4d684fa9a8c035a)’
for `create_mp4.sh` runs. At 7:12pm the
‘[footer](https://gist.github.com/ryancheley/3c91a4b27094c365b121a9dc694c3486)’
for `create_mp4.sh` runs.
Finally at 7:13pm the `run_script.sh` compiles the `h264` files into an `mp4`
and uploads it to YouTube.
Last night while I was at a School Board meeting the whole process ran on its
own. I was super pumped when I checked my YouTube channel and saw that the May
1 hummingbird video was there and I didn’t have to do anything.
",2018-05-02,automating-the-hummingbird-video-upload-to-youtube-or-how-i-finally-got-cron-to-do-what-i-needed-it-to-do-but-in-the-ugliest-way-possible,"Several weeks ago in [Cronjob Redux](/cronjob-redux.html) I wrote that I had
_finally_ gotten Cron to automate the entire process of compiling the `h264`
files into an `mp4` and uploading it to [YouTube](https://www.youtube.com).
I hadn’t. And it took the better part of the last 2 weeks to figure out what …
",Automating the Hummingbird Video Upload to YouTube or How I finally got Cron to do what I needed it to do but in the ugliest way possible,https://www.ryancheley.com/2018/05/02/automating-the-hummingbird-video-upload-to-youtube-or-how-i-finally-got-cron-to-do-what-i-needed-it-to-do-but-in-the-ugliest-way-possible/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/ArchiveIndexView/)
`ArchiveIndexView`
> > Top-level archive of date-based items.
## Attributes
There are 20 attributes that can be set for the `ArchiveIndexView` but most of
them are based on ancestral Classes of the CBV so we won’t be going into them
in detail.
### DateMixin Attributes
* allow_future: Defaults to False. If set to True you can show items that have dates that are in the future where the future is anything after the current date/time on the server.
* date_field: the field that the view will use to filter the date on. If this is not set an error will be generated
* uses_datetime_field: Convert a date into a datetime when the date field is a DateTimeField. When time zone support is enabled, `date` is assumed to be in the current time zone, so that displayed items are consistent with the URL.
### BaseDateListView Attributes
* allow_empty: Defaults to `False`. This means that if there is no data a `404` error will be returned with the message
> > `No __str__ Available` where ‘`__str__`’ is the display of your model
* date_list_period: This attribute allows you to break down by a specific period of time (years, months, days, etc.) and group your date driven items by the period specified. See below for implementation
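Before the period-specific examples, here is a sketch of what a complete view
might look like; it borrows the `Person` model and `post_date` field used in
the other posts in this series:
# sketch: a full ArchiveIndexView for the year example below
from django.views.generic.dates import ArchiveIndexView
from rango.models import Person

class myArchiveIndexView(ArchiveIndexView):
    queryset = Person.objects.all()
    date_field = 'post_date'        # required, or an error is generated
    date_list_period = 'year'       # group the date_list by year
    context_object_name = 'person'  # the name the template uses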
For `year`
views.py
date_list_period='year'
urls.py
Nothing special needs to be done
\.html
{% block content %}
{% for date in date_list %}
{{ date.year }}
{% for p in person %}
{% if date.year == p.post_date.year %}
{{ p }}
{% endif %}
{% endfor %}
{% endfor %}
{% endblock %}
This will render the people grouped by year.
For `month`
views.py
date_list_period='month'
urls.py
Nothing special needs to be done
\.html
{% block content %}
{% for date in date_list %}
{{ date.month }}
{% for p in person %}
{% if date.month == p.post_date.month %}
{{ p }}
{% endif %}
{% endfor %}
{% endfor %}
{% endblock %}
This will render the people grouped by month.
### BaseArchiveIndexView Attributes
* context_object_name: Name the object used in the template. As stated before, you’re going to want to do this so you don’t hate yourself (or have other developers hate you).
## Other Attributes
### MultipleObjectMixin Attributes
These attributes were all reviewed in the [ListView](/cbv-listview.html) post
* model = None
* ordering = None
* page_kwarg = 'page'
* paginate_by = None
* paginate_orphans = 0
* paginator_class = \
* queryset = None
### TemplateResponseMixin Attributes
This attribute was reviewed in the [ListView](/cbv-listview.html) post
* content_type = None
### ContextMixin Attributes
This attribute was reviewed in the [ListView](/cbv-listview.html) post
* extra_context = None
### View Attributes
This attribute was reviewed in the [View](/cbv-view.html) post
* http_method_names = ['get', 'post', 'put', 'patch', 'delete', 'head', 'options', 'trace']
### TemplateResponseMixin Attributes
These attributes were all reviewed in the [ListView](/cbv-listview.html) post
* response_class = \
* template_engine = None
* template_name = None
## Diagram
A visual representation of how `ArchiveIndexView` is derived can be seen here:

## Conclusion
With date-driven data (articles, blogs, etc.), the `ArchiveIndexView` is a
great CBV and super easy to implement.
",2019-11-24,cbv-archiveindexview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/ArchiveIndexView/)
`ArchiveIndexView`
> > Top-level archive of date-based items.
## Attributes
There are 20 attributes that can be set for the `ArchiveIndexView` but most of
them are based on ancestral Classes of the CBV so we won’t be going into them
in Detail.
### DateMixin Attributes
* allow_future: Defaults to …
",CBV - ArchiveIndexView,https://www.ryancheley.com/2019/11/24/cbv-archiveindexview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/BaseListView/)
`BaseListView`
> > A base view for displaying a list of objects.
And from the [Django Docs](https://docs.djangoproject.com/en/2.2/ref/class-
based-views/generic-display/#listview):
> > A base view for displaying a list of objects. It is not intended to be
> used directly, but rather as a parent class of the
> django.views.generic.list.ListView or other views representing lists of
> objects.
Almost all of the functionality of `BaseListView` comes from the
`MultipleObjectMixin`. Since the Django Docs specifically say don’t use this
directly, I won’t go into it too much.
## Diagram
A visual representation of how `BaseListView` is derived can be seen here:

## Conclusion
Don’t use this. It should be subclassed into a usable view (a la `ListView`).
There are many **Base** views that are ancestors for other views. I’m not
going to cover any more of them going forward **UNLESS** the documentation
says there’s a specific reason to.
",2019-11-17,cbv-baselistview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/BaseListView/)
`BaseListView`
> > A base view for displaying a list of objects.
And from the [Django Docs](https://docs.djangoproject.com/en/2.2/ref/class-
based-views/generic-display/#listview):
> > A base view for displaying a list of objects. It is not intended to be
> used directly, but rather as a parent class of the
> django.views.generic.list.ListView …
",CBV - BaseListView,https://www.ryancheley.com/2019/11/17/cbv-baselistview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/CreateView/)
`CreateView`
> > View for creating a new object, with a response rendered by a template.
## Attributes
Three attributes are required to get the template to render. Two we’ve seen
before (`queryset` and `template_name`). The new one we haven’t seen before is
the `fields` attribute.
* fields: specifies what fields from the model or queryset will be displayed on the rendered template. You can set `fields` to `__all__` if you want to return all of the fields
## Example
views.py
class myCreateView(CreateView):
    queryset = Person.objects.all()
    fields = '__all__'
    template_name = 'rango/person_form.html'
urls.py
path('create_view/', views.myCreateView.as_view(), name='create_view'),
\.html
{% extends 'base.html' %}
{% block title %}
{{ title }}
{% endblock %}
{% block content %}
{{ type }} View
{% endblock %}
## Diagram
A visual representation of how `CreateView` is derived can be seen here:

## Conclusion
A simple way to implement a form to create items for a model. We’ve completed
step 1 for a basic **C**RUD application.
",2019-12-01,cbv-createview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/CreateView/)
`CreateView`
> > View for creating a new object, with a response rendered by a template.
## Attributes
Three attributes are required to get the template to render. Two we’ve seen
before (`queryset` and `template_name`). The new one we haven’t seen before is
the `fields` attribute …
",CBV - CreateView,https://www.ryancheley.com/2019/12/01/cbv-createview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/DayArchiveView/)
`DayArchiveView`
> > List of objects published on a given day.
## Attributes
There are six new attributes to review here … well really 3 new ones and then
a formatting attribute for each of these 3:
* day: The day to be viewed
* day_format: The format of the day to be passed. Defaults to `%d`
* month: The month to be viewed
* month_format: The format of the month to be passed. Defaults to `%b`
* year: The year to be viewed
* year_format: The format of the year to be passed. Defaults to `%Y`
## Required Attributes
* day
* month
* year
* date_field: The field that holds the date that will drive every else. We saw this in [ArchiveIndexView](/cbv-archiveindexview)
Additionally you also need `model` or `queryset`
The `day`, `month`, and `year` can be passed via `urls.py` so that they don’t
need to be specified in the view itself.
## Example:
views.py
class myDayArchiveView(DayArchiveView):
    month_format = '%m'
    date_field = 'post_date'
    queryset = Person.objects.all()
    context_object_name = 'person'
    paginate_by = 10
    page_kwarg = 'name'
urls.py
path('day_archive_view////', views.myDayArchiveView.as_view(), name='day_archive_view'),
\_archiveday.html
{% extends 'base.html' %}
{% endblock %}
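The `urls.py` entry needs to hand `year`, `month`, and `day` through to the
view. With the numeric `%m` month format set above, a sketch of that entry
(the `int` path converters are my assumption) is:
# sketch: urls.py entry passing year, month, and day to myDayArchiveView
from django.urls import path
from rango import views

urlpatterns = [
    path(
        'day_archive_view/<int:year>/<int:month>/<int:day>/',
        views.myDayArchiveView.as_view(),
        name='day_archive_view',
    ),
]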
## Diagram
A visual representation of how `DayArchiveView` is derived can be seen here:

## Conclusion
If you have date-based content this is a great tool to use and, again, super
easy to implement.
There are other time-based CBVs for Today, Date, Week, Month, and Year. They
all do the same thing (generally) so I won’t review those.
",2019-11-27,cbv-dayarchiveview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/DayArchiveView/)
`DayArchiveView`
> > List of objects published on a given day.
## Attributes
There are six new attributes to review here … well really 3 new ones and then
a formatting attribute for each of these 3:
* day: The day to be viewed
* day_format: The format of the day …
",CBV - DayArchiveView,https://www.ryancheley.com/2019/11/27/cbv-dayarchiveview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/DeleteView/)
`DeleteView`
> > View for deleting an object retrieved with `self.get_object()`, with a
response rendered by a template.
## Attributes
There are no new attributes, but 2 that we’ve seen are required: (1)
`queryset` or `model`; and (2) `success_url`
## Example
views.py
class myDeleteView(DeleteView):
    queryset = Person.objects.all()
    success_url = reverse_lazy('rango:list_view')
urls.py
path('delete_view/', views.myDeleteView.as_view(), name='delete_view'),
\.html
The template just needs a small `form` that POSTs to this view to get the delete to work.
## Diagram
A visual representation of how `DeleteView` is derived can be seen here:

## Conclusion
As far as implementations go, the ability to add a form to delete data is about
the easiest thing you can do in Django. It requires next to nothing in terms
of implementation. We now have step 4 of a CRUD app!
",2019-12-11,cbv-deleteview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/DeleteView/)
`DeleteView`
> > View for deleting an object retrieved with `self.get_object()`, with a
response rendered by a template.
## Attributes
There are no new attributes, but 2 that we’ve seen are required: (1)
`queryset` or `model`; and (2) `success_url`
## Example
views.py
class myDeleteView(DeleteView …
",CBV - DeleteView,https://www.ryancheley.com/2019/12/11/cbv-deleteview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.detail/DetailView/)
`DetailView`
> > Render a ""detail"" view of an object.
>>
>> By default this is a model instance looked up from `self.queryset`, but the
view will support display of _any_ object by overriding `self.get_object()`.
There are 7 attributes for the `DetailView` that are derived from the
`SingleObjectMixin`. I’ll talk about five of them and then go over the ‘slug’
fields in their own section.
* context_object_name: similar to the `ListView` it allows you to give a more memorable name to the object in the template. You’ll want to use this if you want to have future developers (i.e. you) not hate you
* model: similar to the `ListView` except it only returns a single record instead of all records for the model based on a filter parameter passed via the `slug`
* pk_url_kwarg: you can set this to be something other than pk if you want … though I’m not sure why you’d want to
* query_pk_and_slug: The Django Docs have a pretty clear explanation of what it does
> > This attribute can help mitigate [insecure direct object
> reference](https://www.owasp.org/index.php/Top_10_2013-A4-Insecure_Direct_Object_References)
> attacks. When applications allow access to individual objects by a
> sequential primary key, an attacker could brute-force guess all URLs;
> thereby obtaining a list of all objects in the application. If users with
> access to individual objects should be prevented from obtaining this list,
> setting query _pk_ and*slug to True will help prevent the guessing of URLs
> as each URL will require two correct, non-sequential arguments. Simply using
> a unique slug may serve the same purpose, but this scheme allows you to have
> non-unique slugs. *
* queryset: used to return data to the view. It will supersede the value supplied for `model` if both are present
## The Slug Fields
There are two attributes that I want to talk about separately from the others:
* slug_field
* slug_url_kwarg
If neither `slug_field` nor `slug_url_kwarg` is set then the url must contain
the object’s primary key. The url in the template needs to include `o.id`
### views.py
There is nothing to show in the `views.py` file in this example
### urls.py
path('detail_view/', views.myDetailView.as_view(), name='detail_view'),
### \.html
{% url 'rango:detail_view' o.id %}
If `slug_field` is set but `slug_url_kwarg` is NOT set then the url can contain
the slug value instead. The url in the template needs to include the field that
`slug_field` is set to (here, `o.first_name`).
### views.py
class myDetailView(DetailView):
    slug_field = 'first_name'
### urls.py
path('detail_view//', views.myDetailView.as_view(), name='detail_view'),
### \.html
{% url 'rango:detail_view' o.first_name %}
If `slug_field` is not set but `slug_url_kwarg` is set then you get an error.
Don’t do this one.
If both `slug_field` and `slug_url_kwarg` are set then the url must contain
the value of the field the parameters are set to. The url in the template
needs to include that field (here, `o.first_name`).
### views.py
class myDetailView(DetailView):
    slug_field = 'first_name'
    slug_url_kwarg = 'first_name'
### urls.py
path('detail_view//', views.myDetailView.as_view(), name='detail_view'),
### \.html
{% url 'rango:detail_view' o.first_name %}
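As a concrete sketch of that last combination (the `str` path converter is my
choice for illustration; the rest mirrors the example above):
# sketch: DetailView looking a Person up by first_name in both the model field and the URL
from django.views.generic import DetailView
from rango.models import Person

class myDetailView(DetailView):
    queryset = Person.objects.all()
    slug_field = 'first_name'        # the model field to filter on
    slug_url_kwarg = 'first_name'    # the keyword argument captured in urls.py

# urls.py
# path('detail_view/<str:first_name>/', views.myDetailView.as_view(), name='detail_view'),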
## Diagram
A visual representation of how `DetailView` is derived can be seen here:

## Conclusion
I think the most important part of the `DetailView` is to remember its
relationship to `ListView`. Changes you try to implement on the Class for
`DetailView` need to be incorporated into the template associated with the
`ListView` you have.
",2019-11-24,cbv-detailview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.detail/DetailView/)
`DetailView`
> > Render a ""detail"" view of an object.
>>
>> By default this is a model instance looked up from `self.queryset`, but the
view will support display of _any_ object by overriding `self.get_object()`.
There are 7 attributes for the `DetailView` that are derived from the …
",CBV - DetailView,https://www.ryancheley.com/2019/11/24/cbv-detailview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/FormView/)
`FormView`
> > A view for displaying a form and rendering a template response.
## Attributes
The only new attribute to review this time is `form_class`. That being said,
there are a few implementation details to cover
* form_class: takes a Form class and is used to render the form on the `html` template later on.
## Methods
Up to this point we haven’t really needed to override a method to get any of
the views to work. This time though, we need some way for the view to verify
that the data is valid and then save it somewhere.
* form_valid: used to verify that the data entered is valid and then saves to the database. Without this method your form doesn’t do anything
## Example
This example is a bit more involved than previous examples. A new file called
`forms.py` is used to define the form that will be used.
forms.py
from django.forms import ModelForm
from rango.models import Person

class PersonForm(ModelForm):
    class Meta:
        model = Person
        exclude = [
            'post_date',
        ]
views.py
class myFormView(FormView):
    form_class = PersonForm
    template_name = 'rango/person_form.html'
    extra_context = {
        'type': 'Form'
    }
    success_url = reverse_lazy('rango:list_view')

    def form_valid(self, form):
        person = Person.objects.create(
            first_name=form.cleaned_data['first_name'],
            last_name=form.cleaned_data['last_name'],
            post_date=datetime.now(),
        )
        return super(myFormView, self).form_valid(form)
urls.py
path('form_view/', views.myFormView.as_view(), name='form_view'),
\.html
{{ type }} View
{% if type != 'Update' %}
## Diagram
A visual representation of how `FormView` is derived can be seen here:

## Conclusion
I really struggled with understanding _why_ you would want to implement
`FormView`. I found this explanation on
[Agiliq](https://www.agiliq.com/blog/2019/01/django-formview/) and it helped
me grok the why:
> > FormView should be used when you need a form on the page and want to
> perform certain action when a valid form is submitted. eg: Having a contact
> us form and sending an email on form submission.
>>
>> CreateView would probably be a better choice if you want to insert a model
instance in database on form submission.
While my example above works, it’s not the intended use of `FormView`. Really,
it’s just an implementation of `CreateView` using `FormView`
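For what it’s worth, a sketch of the contact-form use case from that quote
could look like the following; the form fields, template name, and email
addresses are all made up for illustration:
# sketch: FormView whose 'certain action' is sending an email on a valid submission
from django import forms
from django.core.mail import send_mail
from django.urls import reverse_lazy
from django.views.generic import FormView

class ContactForm(forms.Form):
    email = forms.EmailField()
    message = forms.CharField(widget=forms.Textarea)

class ContactFormView(FormView):
    form_class = ContactForm
    template_name = 'rango/contact_form.html'
    success_url = reverse_lazy('rango:list_view')

    def form_valid(self, form):
        # send the email, then let FormView redirect to success_url
        send_mail(
            subject='Contact form submission',
            message=form.cleaned_data['message'],
            from_email=form.cleaned_data['email'],
            recipient_list=['you@example.com'],
        )
        return super().form_valid(form)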
",2019-12-04,cbv-formview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/FormView/)
`FormView`
> > A view for displaying a form and rendering a template response.
## Attributes
The only new attribute to review this time is `form_class`. That being said,
there are a few implementation details to cover
* form_class: takes a Form class and is used to render the …
",CBV - FormView,https://www.ryancheley.com/2019/12/04/cbv-formview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/ListView/)
`ListView`:
> > Render some list of objects, set by `self.model` or `self.queryset`.
>>
>> `self.queryset` can actually be any iterable of items, not just a queryset.
There are 16 attributes for the `ListView` but only 2 types are required to
make the page return something other than a
[500](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#5xx_Server_errors)
error:
* Data
* Template Name
## Data Attributes
You have a choice of either using `model` or `queryset` to specify **what**
data to return. Without one of them you get an error.
The `model` attribute gives you less control but is easier to implement. If
you want to see ALL of the records of your model, just set
model = ModelName
However, if you want to have a bit more control over what is going to be
displayed you’ll want to use `queryset`, which allows you to chain query
methods onto the specified model, e.g. `filter`, `order_by`.
queryset = ModelName.objects.filter(field_name='filter')
If you specify both `model` and `queryset` then `queryset` takes precedence.
## Template Name Attributes
You have a choice of using `template_name` or `template_name_suffix`. The
`template_name` allows you to directly control what template will be used. For
example, if you have a template called `list_view.html` you can specify it
directly in `template_name`.
`template_name_suffix` will calculate what the template name should be by
using the app name, model name, and appending the value set to the
`template_name_suffix`.
In pseudo code:
templates/<app_name>/<model_name><template_name_suffix>.html
For an app named `rango` and a model named `person` setting
`template_name_suffix` to `_test` would resolve to
templates/rango/person_test.html
## Other Attributes
If you want to return something interesting you’ll also need to specify
* allow_empty: The default for this is true which allows the page to render if there are no records. If you set this to `false` then returning no records will result in a 404 error
* context_object_name: allows you to give a more memorable name to the object in the template. You’ll want to use this if you want to have future developers (i.e. you) not hate you
* ordering: allows you to specify the order that the data will be returned in. The field specified must exist in the `model` or `queryset` that you’ve used
* page_kwarg: this indicates the query string parameter name to use when going from page x to y; defaults to `page`, but overriding it to something more sensible can be helpful for SEO. For example you can use `name` instead of `page` if you’ve got a page that has a bunch of names

* paginate_by: determines the maximum number of records to return on any page.
* paginate_orphans: the number of extra items allowed on the last page; this helps keep you from ending up with a final page that has a singleton (or some other small number of items) on it
* paginator_class: class that defines several of the attributes above. Don’t mess with this unless you have an actual reason to do so. Also … you’re not a special snowflake; there are literal dragons down this road. Go back!
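Pulling a few of these attributes together, here is a sketch using the
`Person` model from elsewhere in this series; the filter, ordering, and page
sizes are arbitrary choices for illustration:
# sketch: a ListView combining queryset, ordering, naming, and pagination attributes
from django.views.generic import ListView
from rango.models import Person

class myListView(ListView):
    queryset = Person.objects.filter(last_name__startswith='C')  # more control than model = Person
    ordering = ['-post_date']              # newest first
    context_object_name = 'person'         # what the template iterates over
    template_name = 'rango/list_view.html'
    paginate_by = 10                       # at most 10 records per page
    paginate_orphans = 2                   # fold a tiny final page into the previous one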
## Diagram
A visual representation of how `ListView` is derived can be seen here:

## Conclusion
The `ListView` CBV is a powerful and highly customizable tool that allows you
to display the data from a single model quite easily.
",2019-11-17,cbv-listview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/ListView/)
`ListView`:
> > Render some list of objects, set by `self.model` or `self.queryset`.
>>
>> `self.queryset` can actually be any iterable of items, not just a queryset.
There are 16 attributes for the `ListView` but only 2 types are required to
make the page return something …
",CBV - ListView,https://www.ryancheley.com/2019/11/17/cbv-listview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/LoginView/)
`LoginView`
> > Display the login form and handle the login action.
## Attributes
* authentication_form: Allows you to subclass `AuthenticationForm` if needed. You would want to do this IF you need other fields besides username and password for login OR you want to implement other logic than just account creation, i.e. account verification must be done as well. See this [example](https://simpleisbetterthancomplex.com/tips/2016/08/12/django-tip-10-authentication-form-custom-login-policy.html) by Vitor Freitas for more details
* form_class: The form that will be used by the template created. Defaults to Django’s `AuthenticationForm`
* redirect_authenticated_user: If the user is logged in then when they attempt to go to your login page it will redirect them to the `LOGIN_REDIRECT_URL` configured in your `settings.py`
* redirect_field_name: similar idea to updating what the `next` field will be from the `DetailView`. If this is specified then you’ll most likely need to create a custom login template.
* template_name: The default value for this is `registration/login.html`, i.e. a file called `login.html` in the `registration` directory of the `templates` directory.
There are no required attributes for this view, which is nice because you can
just add `pass` to the view and you’re set (for the view, anyway; you still need
an html file).
You’ll also need to update `settings.py` to include a value for the
`LOGIN_REDIRECT_URL`.
### Note on redirect_field_name
Per the [Django
Documentation](https://docs.djangoproject.com/en/2.2/topics/auth/default/#django.contrib.auth.decorators.login_required):
> > If the user isn’t logged in, redirect to settings.LOGIN_URL, passing the
> current absolute path in the query string. Example:
> /accounts/login/?next=/polls/3/.
If `redirect_field_name` is set then the URL would use that name in place of `next`, e.g.:
/accounts/login/?<redirect_field_name>=/polls/3/
Basically, you only use this if you have a pretty good reason.
## Example
views.py
class myLoginView(LoginView):
    pass
urls.py
path('login_view/', views.myLoginView.as_view(), name='login_view'),
registration/login.html
{% extends ""base.html"" %}
{% load i18n %}
{% block content %}
{% endblock %}
settings.py
LOGIN_REDIRECT_URL = '//'
## Diagram
A visual representation of how `LoginView` is derived can be seen here:

## Conclusion
Really easy to implement right out of the box but allows some nice
customization. That being said, make those customizations IF you need to, not
just because you think you want to.
",2019-12-15,cbv-loginview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/LoginView/)
`LoginView`
> > Display the login form and handle the login action.
## Attributes
* authentication_form: Allows you to subclass `AuthenticationForm` if needed. You would want to do this IF you need other fields besides username and password for login OR you want to implement other logic than just …
",CBV - LoginView,https://www.ryancheley.com/2019/12/15/cbv-loginview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/LogoutView/)
`LogoutView`
> > Log out the user and display the 'You are logged out' message.
## Attributes
* next_page: redirects the user on logout.
* [redirect_field_name](https://docs.djangoproject.com/en/2.2/topics/auth/default/#django.contrib.auth.views.LogoutView): The name of a GET field containing the URL to redirect to after log out. Defaults to next. Overrides the next_page URL if the given GET parameter is passed. 1
* template_name: defaults to `registration/logged_out.html`. Even if you don’t have a template the view does get rendered but it uses the default Django skin. You’ll want to create your own to allow the user to logout AND to keep the look and feel of the site.
## Example
views.py
class myLogoutView(LogoutView):
    pass
urls.py
path('logout_view/', views.myLogoutView.as_view(), name='logout_view'),
registration/logged_out.html
{% extends ""base.html"" %}
{% load i18n %}
{% block content %}
{% trans ""Logged out"" %}
{% endblock %}
## Diagram
A visual representation of how `LogoutView` is derived can be seen here:
Image Link from CCBV YUML goes here
## Conclusion
I’m not sure how it could be much easier to implement a logout page.
1. Per Django Docs ↩︎
",2019-12-15,cbv-logoutview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/LogoutView/)
`LogoutView`
> > Log out the user and display the 'You are logged out' message.
## Attributes
* next_page: redirects the user on logout.
* [redirect_field_name](https://docs.djangoproject.com/en/2.2/topics/auth/default/#django.contrib.auth.views.LogoutView): The name of a GET field containing the URL to redirect to after log out. Defaults to next. Overrides the next_page URL if the …
",CBV - LogoutView,https://www.ryancheley.com/2019/12/15/cbv-logoutview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/PasswordChangeDoneView/)
`PasswordChangeDoneView`
> > Render a template. Pass keyword arguments from the URLconf to the context.
## Attributes
* template_name: Much like the `LogoutView` the default view is the Django skin. Create your own `password_change_done.html` file to keep the user experience consistent across the site.
* title: the default uses the function `gettext_lazy()` and passes the string ‘Password change successful’. The function `gettext_lazy()` will translate the text into the local language if a translation is available. I’d just keep the default on this.
## Example
views.py
class myPasswordChangeDoneView(PasswordChangeDoneView):
    pass
urls.py
path('password_change_done_view/', views.myPasswordChangeDoneView.as_view(), name='password_change_done_view'),
password_change_done.html
{% extends ""base.html"" %}
{% load i18n %}
{% block content %}
{% block title %}
{{ title }}
{% endblock %}
{% trans ""Password changed"" %}
{% endblock %}
settings.py
LOGIN_URL = '//login_view/'
The above assumes that you have this set up in your `urls.py`.
## Special Notes
You need to set the `LOGIN_URL` value in your `settings.py`. It defaults to
`/accounts/login/`. If that path isn’t valid you’ll get a 404 error.
## Diagram
A visual representation of how `PasswordChangeDoneView` is derived can be seen
here:

## Conclusion
Again, not much to do here. Let Django do all of the heavy lifting, but be
mindful of the needed work in `settings.py` and the new template you’ll
need/want to create
",2019-12-25,cbv-passwordchangedoneview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/PasswordChangeDoneView/)
`PasswordChangeDoneView`
> > Render a template. Pass keyword arguments from the URLconf to the context.
## Attributes
* template_name: Much like the `LogoutView` the default view is the Django skin. Create your own `password_change_done.html` file to keep the user experience consistent across the site.
* title: the default uses …
",CBV - PasswordChangeDoneView,https://www.ryancheley.com/2019/12/25/cbv-passwordchangedoneview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/PasswordChangeView/)
`PasswordChangeView`
> > A view for displaying a form and rendering a template response.
## Attributes
* form_class: The form that will be used by the template created. Defaults to Django’s `PasswordChangeForm`
* success_url: If you’ve created your own custom PasswordChangeDoneView then you’ll need to update this. The default is to use Django’s, but unless your top level `urls.py` has an entry named `password_change_done` you’ll get an error.
* title: defaults to ‘Password Change’ and is translated into local language
## Example
views.py
class myPasswordChangeView(PasswordChangeView):
    success_url = reverse_lazy('rango:password_change_done_view')
urls.py
path('password_change_view/', views.myPasswordChangeView.as_view(), name='password_change_view'),
password_change_form.html
{% extends ""base.html"" %}
{% load i18n %}
{% block content %}
{% block title %}
{{ title }}
{% endblock %}
{% trans ""Password changed"" %}
{% endblock %}
## Diagram
A visual representation of how `PasswordChangeView` is derived can be seen
here:

## Conclusion
The only thing to keep in mind here is the success_url that will most likely
need to be set based on the application you’ve written. If you get an error
about not being able to use `reverse` to find your template, that’s the issue.
",2019-12-22,cbv-passwordchangeview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/PasswordChangeView/)
`PasswordChangeView`
> > A view for displaying a form and rendering a template response.
## Attributes
* form_class: The form that will be used by the template created. Defaults to Django’s `PasswordChangeForm`
* success_url: If you’ve created your own custom PasswordChangeDoneView then you’ll need to update this …
",CBV - PasswordChangeView,https://www.ryancheley.com/2019/12/22/cbv-passwordchangeview/
ryan,technology,"From [Classy Class Based
View](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.base/RedirectView/)
the `RedirectView` will
> > Provide a redirect on any GET request.
It is an extension of `View` and has 5 attributes:
* http_method_names (from `View`)
* pattern_name: The name of the URL pattern to redirect to. 1 This will be used if no `url` is used.
* permanent: a flag to determine if the redirect is permanent or not. If set to `True`, then the [HTTP Status Code](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#3xx_Redirection) [301](https://en.wikipedia.org/wiki/HTTP_301) is returned. If set to `False` the [302](https://en.wikipedia.org/wiki/HTTP_302) is returned
* query_string: If `True` then it will pass along the query string from the RedirectView. If it’s `False` it won’t. If this is set to `True` and neither `pattern_name` nor `url` are set then nothing will be passed to the `RedirectView`
* url: Where the Redirect should point. It will take precedence over the `pattern_name`, so you should use only `url` or `pattern_name`, but not both. This will need to be an absolute url, not a relative one, otherwise you may get a [404](https://en.wikipedia.org/wiki/HTTP_404) error
The example below will give a `301` status code:
class myRedirectView(RedirectView):
    pattern_name = 'rango:template_view'
    permanent = True
    query_string = True
While this would be a `302` status code:
class myRedirectView(RedirectView):
    pattern_name = 'rango:template_view'
    permanent = False
    query_string = True
## Methods
The method `get_redirect_url` allows you to perform actions when the
redirect is called. From the [Django
Docs](https://docs.djangoproject.com/en/2.2/ref/class-based-
views/base/#redirectview) the example given is increasing a counter on an
Article Read value.
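A minimal sketch of that pattern (the `Article` model, its `read_count` field, and the `article-detail` URL name are all assumptions made here for illustration):
from django.shortcuts import get_object_or_404
from django.views.generic.base import RedirectView

from .models import Article  # hypothetical model with a read_count field


class ArticleCounterRedirectView(RedirectView):
    permanent = False
    query_string = True
    pattern_name = 'article-detail'  # hypothetical URL name

    def get_redirect_url(self, *args, **kwargs):
        # bump the counter, then let RedirectView build the URL as usual
        article = get_object_or_404(Article, pk=kwargs['pk'])
        article.read_count += 1
        article.save(update_fields=['read_count'])
        return super().get_redirect_url(*args, **kwargs)
The redirect itself still comes from `pattern_name`; the override is just a hook to do some work before the redirect happens.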
## Diagram
A visual representation of how `RedirectView` derives from `View` 2

## Conclusion
In general, given the power of the url mapping in Django I’m not sure why you
would need to use the RedirectView. From [Real
Python](https://docs.djangoproject.com/en/2.2/ref/class-based-
views/base/#redirectview) they concur, stating:
> > As you can see, the class-based approach does not provide any obvious
> benefit while adding some hidden complexity. That raises the question:
> **when should you use RedirectView?**
>>
>> If you want to add a redirect directly in your urls.py, using RedirectView
makes sense. But if you find yourself overwriting get_redirect_url, a
function-based view might be easier to understand and more flexible for future
enhancements.
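For the first case in that quote, a redirect declared directly in `urls.py`, a minimal sketch might look like this (both paths are made up for illustration):
from django.urls import path
from django.views.generic.base import RedirectView

urlpatterns = [
    # hypothetical: permanently send an old URL to its new home
    path('old-page/', RedirectView.as_view(url='/new-page/', permanent=True)),
]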
1. From the [Django Docs](https://docs.djangoproject.com/en/2.2/ref/class-based-views/base/) ↩︎
2. Original Source from Classy Class Based Views ↩︎
",2019-11-10,cbv-redirectview,"From [Classy Class Based
View](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.base/RedirectView/)
the `RedirectView` will
> > Provide a redirect on any GET request.
It is an extension of `View` and has 5 attributes:
* http_method_names (from `View`)
* pattern_name: The name of the URL pattern to redirect to. 1 This will be used if no `url` is used.
* permanent: a …
",CBV - RedirectView,https://www.ryancheley.com/2019/11/10/cbv-redirectview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.base/TemplateView/)
the `TemplateView` will
> > Render a template. Pass keyword arguments from the URLconf to the context.
It is an extended version of the `View` CBV with the `ContextMixin` and
the `TemplateResponseMixin` added to it.
It has several attributes that can be set
* content_type: will allow you to define the MIME type that the page will return. The default is `DEFAULT_CONTENT_TYPE` but can be overridden with this attribute.
* extra_context: this can be set as a class attribute or passed as a keyword argument to `as_view()`; either way it gets added to the template context
* http_method_names: derived from `View` and has the same definition
* response_class: The response class to be returned by the `render_to_response` method; it defaults to `TemplateResponse`. See below for further discussion
* template_engine: can be used to specify which template engine to use IF you have configured the use of multiple template engines in your `settings.py` file. See the [Usage](https://docs.djangoproject.com/en/2.2/topics/templates/#usage) section of the Django Documentation on Templates
* template_name: this attribute is required IF the method `get_template_names()` is not used.
## More on `response_class`
This confuses the ever living crap out of me. The best (only) explanation I
have found is by GitHub user `spapas` in his article [Django non-HTML
responses](https://spapas.github.io/2014/09/15/django-non-html-
responses/#rendering-to-non-html):
> > From the previous discussion we can conclude that if your non-HTML
> response needs a template then you just need to create a subclass of
> TemplateResponse and assign it to the response_class attribute (and also
> change the content_type attribute). On the other hand, if your non-HTML
> response does not need a template to be rendered then you have to override
> render_to_response completely (since the template parameter does not need
> to be passed now) and either define a subclass of HttpResponse or do the
> rendering in the render_to_response.
Basically, if you ever want to use a non-HTML template you’d set this
attribute, but it seems available mostly as a ‘just-in-case’ and not something
that’s used every day.
My advice … just leave it as is.
## When to use the `get` method
An answer which makes sense to me that I found on
[StackOverflow](https://stackoverflow.com/questions/35824904/django-view-get-
context-data-vs-get) was (slightly modified to make it more understandable)
> > if you need to have data available every time, use get_context_data(). If
> you need the data only for a specific request method (eg. in get), then put
> it in get.
## When to use the `get_template_names` method
This method allows you to easily change a template being used based on values
passed through GET.
This can be helpful if you want to have one template for a super user and
another template for a basic user. This helps to keep business logic out of
the template and in the view where it belongs.
This can also be useful if you want to specify several possible templates to
use. A list is passed and Django will work through that list from the first
element to the last until it finds a template that exists and render it.
If you don’t specify template_name you have to use this method.
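As a minimal sketch of the superuser / basic user case described above (the view name and template names are assumptions for illustration):
from django.views.generic import TemplateView


class DashboardView(TemplateView):
    def get_template_names(self):
        # return a list; Django renders the first template it can find
        if self.request.user.is_superuser:
            return ['dashboard_admin.html', 'dashboard.html']
        return ['dashboard.html']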
## When to use the `get_context_data` method
See above in the section When to use the `get` method
## Diagram
A visual representation of how `TemplateView` derives from `View` 1

## Conclusion
If you want to roll your own CBV because you have a super specific use case,
starting at the `TemplateView` is going to be a good place to start. However,
you may find that there is already a view that is going to do what you need it
to. Writing your own custom implementation of `TemplateView` may be a waste of
time **IF** you haven’t already verified that what you need isn’t already
there.
1. Original Source from Classy Class Based Views ↩︎
",2019-11-03,cbv-template-view,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.base/TemplateView/)
the `TemplateView` will
> > Render a template. Pass keyword arguments from the URLconf to the context.
It is an extended version of the `View` CBV with the the `ContextMixin` and
the `TemplateResponseMixin` added to it.
It has several attributes that can be set
* content_type: will allow …
",CBV - Template View,https://www.ryancheley.com/2019/11/03/cbv-template-view/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/UpdateView/)
`UpdateView`
> > View for updating an object, with a response rendered by a template.
## Attributes
Two attributes are required to get the template to render. We’ve seen
`queryset` before and in [CreateView](/cbv-createview/) we saw `fields`. As a
brief refresher
* fields: specifies what fields from the model or queryset will be displayed on the rendered template. You can set `fields` to `__all__` if you want to return all of the fields
* success_url: where the user is redirected after the record has been updated, so that you know the update was made.
## Example
views.py
class myUpdateView(UpdateView):
    queryset = Person.objects.all()
    fields = '__all__'
    extra_context = {
        'type': 'Update'
    }
    success_url = reverse_lazy('rango:list_view')
urls.py
path('update_view/', views.myUpdateView.as_view(), name='update_view'),
\.html
{% block content %}
{{ type }} View
{% if type == 'Create' %}
{% endif %}
{% endblock %}
## Diagram
A visual representation of how `UpdateView` is derived can be seen here:

## Conclusion
A simple way to implement a form to update data in a model. Step 3 for a CR
**U** D app is now complete!
",2019-12-08,cbv-updateview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/UpdateView/)
`UpdateView`
> > View for updating an object, with a response rendered by a template.
## Attributes
Two attributes are required to get the template to render. We’ve seen
`queryset` before and in [CreateView](/cbv-createview/) we saw `fields`. As a
brief refresher
* fields: specifies what fields from the …
",CBV - UpdateView,https://www.ryancheley.com/2019/12/08/cbv-updateview/
ryan,technology,"`View` is the ancestor of ALL Django CBV. From the great site [Classy Class
Based Views](http://ccbv.co.uk), they are described as
> > Intentionally simple parent class for all views. Only implements dispatch-
> by-method and simple sanity checking.
This is no joke. The `View` class has almost nothing to it, but it’s a solid
foundation for everything else that will be done.
Its implementation has just one attribute `http_method_names` which is a list
that allows you to specify what http verbs are allowed.
Other than that, there’s really not much to it. You just write a simple
method, something like this:
def get(self, _):
    return HttpResponse('My Content')
All that gets returned to the page is simple HTML. You can specify the
`content_type` if you just want to return JSON or plain text by defining the
content_type like this:
def get(self, _):
    return HttpResponse('My Content', content_type='text/plain')
You can also make the text that is displayed be based on a variable defined in
the class.
First, you need to define the variable
content = 'This is a {View} template and is not used for much of anything but allowing extensions of it for other Views'
And then you can do something like this:
def get(self, _):
    return HttpResponse(self.content, content_type='text/plain')
Also, as mentioned above you can specify the allowable methods via the
attribute `http_method_names`.
The following HTTP methods are allowed:
* get
* post
* put
* patch
* delete
* head
* options
* trace
By default all are allowed.
If we put all of the pieces together we can see that a really simple `View`
CBV would look something like this:
class myView(View):
    content = 'This is a {View} template and is not used for much of anything but allowing extensions of it for other Views'
    http_method_names = ['get']

    def get(self, _):
        return HttpResponse(self.content, content_type='text/plain')
This `View` will return `content` to the page rendered as plain text. This CBV
is also limited to only allowing `get` requests.
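To actually see it in a browser the view still has to be wired up in `urls.py`; a minimal sketch (the path and name here are my own choices):
from django.urls import path

from . import views

urlpatterns = [
    path('my_view/', views.myView.as_view(), name='my_view'),
]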
Here’s what it looks like in the browser:

## Conclusion
`View` doesn’t do much, but it’s the base for everything else, so
understanding it is going to be important.
",2019-10-27,cbv-view,"`View` is the ancestor of ALL Django CBV. From the great site [Classy Class
Based Views](http://ccbv.co.uk), they are described as
> > Intentionally simple parent class for all views. Only implements dispatch-
> by-method and simple sanity checking.
This is no joke. The `View` class has almost nothing to it, but it’s a …
",CBV - View,https://www.ryancheley.com/2019/10/27/cbv-view/
ryan,technology,"As I’ve written about [previously](/my-first-project-after-completing-
the-100-days-of-web-in-python.html) I’m working on a Django app. It’s in a
pretty good spot (you should totally check it out over at
[StadiaTracker.com](https://www.stadiatracker.com)) and I thought now would be
a good time to learn a bit more about some of the ways that I’m rendering the
pages.
I’m using Class Based Views (CBV) and I realized that I really didn’t
[grok](https://en.wikipedia.org/wiki/Grok) how they worked. I wanted to change
that.
I’ll be working on a series where I deep dive into the CBV and work them from
several angles and try to get them to do all of the things that they are
capable of.
The first place I’d suggest anyone start to get a good idea of CBV, and the
idea of Mixins would be [SpaPas’ GitHub
Page](https://spapas.github.io/2018/03/19/comprehensive-django-cbv-guide/)
where he does a really good job of covering many pieces of the CBV. It’s a
great resource!
This is just the intro to this series and my hope is that I’ll publish one of
these pieces each week for the next several months as I work my way through
all of the various CBV that are available.
",2019-10-27,class-based-views,"As I’ve written about [previously](/my-first-project-after-completing-
the-100-days-of-web-in-python.html) I’m working on a Django app. It’s in a
pretty good spot (you should totally check it out over at
[StadiaTracker.com](https://www.stadiatracker.com)) and I thought now would be
a good time to learn a bit more about some of the ways that …
",Class Based Views,https://www.ryancheley.com/2019/10/27/class-based-views/
ryan,technology,"I went to [DjangoCon US](https://2022.djangocon.us) a few weeks ago and [hung
around for the
sprints](https://twitter.com/pauloxnet/status/1583350887375773696). I was
particularly interested in working on open tickets related to the ORM. It so
happened that [Simon Charette](https://github.com/charettes) was at Django Con
and was able to meet with several of us to talk through the inner working of
the ORM.
With Simon helping to guide us, I took a stab at an open ticket and settled on
[10070](https://code.djangoproject.com/ticket/10070). After reviewing it on my
own, and then with Simon, it looked like it wasn't really a bug anymore, and
so we agreed that I could mark it as
[done](https://code.djangoproject.com/ticket/10070#comment:22).
Kind of anticlimactic given what I was **hoping** to achieve, but a closed
ticket is a closed ticket! And so I [tweeted out my
accomplishment](https://twitter.com/ryancheley/status/1583206004744867841) for
all the world to see.
A few weeks later though, a
[comment](https://code.djangoproject.com/ticket/10070#comment:22) was added
that it actually was still a bug and it was reopened.
I was disappointed ... but I now had a chance to actually fix a real bug! [I
started in earnest](https://github.com/ryancheley/public-
notes/issues/1#issue-1428819941).
A suggestion / pattern for working through learning new things that [Simon
Willison](https://simonwillison.net) had mentioned was having a `public-notes`
repo on GitHub. He's had some great stuff that he's worked through that you
can see [here](https://github.com/simonw/public-notes/issues?q=is%3Aissue).
Using this as a starting point, I decided to [walk through what I learned
while working on this open ticket](https://github.com/ryancheley/public-
notes/issues/1).
Over the course of 10 days I had a 38 comment 'conversation with myself' and
it was **super** helpful!
A couple of key takeaways from working on this issue:
* [Carlton Gibson](https://github.com/carltongibson) [said](https://overcast.fm/+QkIrhujD0/21:00) essentially once you start working a ticket from [Trac](https://code.djangoproject.com/), you are the world's foremost expert on that ticket ... and he's right!
* ... But, you're not working the ticket alone! During the course of my work on the issue I had help from [Simon Charette](https://github.com/charettes), [Mariusz Felisiak](https://github.com/felixxm), [Nick Pope](https://github.com/ngnpope), and [Shai Berger](https://github.com/shaib)
* The ORM can seem big and scary ... but remember, it's _just_ Python
I think that each of these lessons learned is important for anyone thinking of
contributing to Django (or other open source projects).
That being said, the last point is one that I think can't be emphasized
enough.
The ORM has a reputation for being this big black box that only 'really smart
people' can understand and contribute to. But, it really is _just_ Python.
If you're using Django, you know (more likely than not) a little bit of
Python. Also, if you're using Django, and have written **any** models, you
have a conceptual understanding of what SQL is trying to do (well enough I
would argue) that you can get in there AND make sense of what is happening.
And if you know a little bit of Python a great way to learn more is to get
into a project like Django and try to fix a bug.
[My initial solution](https://code.djangoproject.com/ticket/10070#comment:27)
isn't [the final one that got
merged](https://github.com/django/django/pull/16243) ... it was a
collaboration with 4 people, 2 of whom I've never met in real life, and the
other 2 I only just met at DjangoCon US a few weeks before.
While working through this I learned just as much from the feedback on my code
as I did from trying to solve the problem with my own code.
All of this is to say, contributing to open source can be hard, it can be
scary, but honestly, I can't think of a better place to start than Django, and
there are [lots of places to
start](https://code.djangoproject.com/query?owner=nobody&status=assigned&status=new&col=id&col=summary&col=owner&col=status&col=component&col=type&col=version&desc=1&order=id).
And for those of you feeling a bit adventurous, there are plenty of
[ORM](https://code.djangoproject.com/query?status=assigned&status=new&owner=nobody&component=Database+layer+\(models%2C+ORM\)&col=id&col=summary&col=status&col=component&col=owner&col=type&col=version&desc=1&order=id)
tickets just waiting for you to try and fix them!
",2022-11-12,contributing-to-django,"I went to [DjangoCon US](https://2022.djangocon.us) a few weeks ago and [hung
around for the
sprints](https://twitter.com/pauloxnet/status/1583350887375773696). I was
particularly interested in working on open tickets related to the ORM. It so
happened that [Simon Charette](https://github.com/charettes) was at Django Con
and was able to meet with several of us to talk through …
",Contributing to Django or how I learned to stop worrying and just try to fix an ORM Bug,https://www.ryancheley.com/2022/11/12/contributing-to-django/
ryan,technology,"Last Saturday (July 3rd) while on vacation, I dubbed it “Security update
Saturday”. I took the opportunity to review all of the GitHub bot alerts about
out of date packages, and make the updates I needed to.
This included updating `django-sql-dashboard` to [version
1.0](https://github.com/simonw/django-sql-dashboard/releases/tag/1.0) … which
I was really excited about doing. It included two things I was eager to see:
1. Implemented a new column cog menu, with options for sorting, counting distinct items and counting by values. [#57](https://github.com/simonw/django-sql-dashboard/issues/57)
2. Admin change list view now only shows dashboards the user has permission to edit. Thanks, [Atul Varma](https://github.com/atverma). [#130](https://github.com/simonw/django-sql-dashboard/issues/130)
I made the updates on my site StadiaTracker.com using my normal workflow:
1. Make the change locally on my MacBook Pro
2. Run the tests
3. Push to UAT
4. Push to PROD
The next day, on July 4th, I got the following error message via my error
logging:
Internal Server Error: /dashboard/games-seen-in-person/
ProgrammingError at /dashboard/games-seen-in-person/
could not find array type for data type information_schema.sql_identifier
So I copied the [url](https://stadiatracker.com/dashboard/games-seen-in-
person/) `/dashboard/games-seen-in-person/` to see if I could replicate the
issue as an authenticated user and sure enough, I got a 500 Server error.
## Troubleshooting process
The first thing I did was to fire up the local version and check the url
there. Oddly enough, it worked without issue.
OK … well that’s odd. What are the differences between the local version and
the uat / prod version?
The local version is running on macOS 10.15.7 while the uat / prod versions
are running Ubuntu 18.04. That could be one source of the issue.
The local version is running Postgres 13.2 while the uat / prod versions are
running Postgres 10.17
OK, two differences. Since the error is `could not find array type for data
type information_schema.sql_identifier` I’m going to start with taking a look
at the differences on the Postgres versions.
First, I looked at the [Change Log](https://github.com/simonw/django-sql-
dashboard/releases) to see what changed between version 0.16 and version 1.0.
Nothing jumped out at me, so I looked at the
[diff](https://github.com/simonw/django-sql-dashboard/compare/acb3752..b8835)
between several files between the two versions looking specifically for
`information_schema.sql_identifier` which didn’t bring up anything.
Next I checked for either `information_schema` or `sql_identifier` and found a
change in the `views.py` file. On line 151 (version 0.16) this change was
made:
string_agg(column_name, ', ' order by ordinal_position) as columns
to this:
array_to_json(array_agg(column_name order by ordinal_position)) as columns
Next, I extracted the entire SQL statement from the `views.py` file to run in
Postgres on the UAT server
with visible_tables as (
select table_name
from information_schema.tables
where table_schema = 'public'
order by table_name
),
reserved_keywords as (
select word
from pg_get_keywords()
where catcode = 'R'
)
select
information_schema.columns.table_name,
array_to_json(array_agg(column_name order by ordinal_position)) as columns
from
information_schema.columns
join
visible_tables on
information_schema.columns.table_name = visible_tables.table_name
where
information_schema.columns.table_schema = 'public'
group by
information_schema.columns.table_name
order by
information_schema.columns.table_name
Running this generated the same error I was seeing from the logs!
Next, I picked apart the various select statements, testing each one to see
what failed, and ended on this one:
select information_schema.columns.table_name,
array_to_json(array_agg(column_name order by ordinal_position)) as columns
from information_schema.columns
Which generated the same error message. Great!
In order to determine how to proceed next I googled `sql_identifier` to see
what it was. Turns out it’s a field type in Postgres! (I’ve been working in
MSSQL for more than 10 years and as far as I know, this isn’t a field type
over there, so I learned something)
Further, there were [changes made to that field type in Postgres
12](https://bucardo.org/postgres_all_versions#version_12.0)!
OK, since there were changes made to that field type in Postgres 12, I’ll
probably need to cast the field to another field type that won’t fail.
That led me to try this:
select information_schema.columns.table_name,
array_to_json(array_agg(cast(column_name as text) order by ordinal_position)) as columns
from information_schema.columns
Which returned a value without error!
## Submitting the updated code
With the solution in hand, I read the [Contribution
Guide](https://github.com/simonw/django-sql-
dashboard/blob/main/docs/contributing.md) and submitted my patch. And the
most awesome part? Within less than an hour Simon Willison (the project’s
maintainer) had replied back and merged my code!
And then, the icing on the cake was getting a [shout out in a post that Simon
wrote](https://simonwillison.net/2021/Jul/6/django-sql-dashboard/) up about
the update that I submitted!
Holy smokes that was sooo cool.
I love solving problems, and I love writing code, so this kind of stuff just
really makes my day.
Now, I’ve contributed to an open source project (that makes 3 now!) and the
issue with the `/dashboard/` has been fixed.
",2021-07-09,contributing-to-django-sql-dashboard,"Last Saturday (July 3rd) while on vacation, I dubbed it “Security update
Saturday”. I took the opportunity to review all of the GitHub bot alerts about
out of date packages, and make the updates I needed to.
This included updated `django-sql-dashboard` to [version
1.0](https://github.com/simonw/django-sql-dashboard/releases/tag/1.0) … which
I was really excited …
",Contributing to django-sql-dashboard,https://www.ryancheley.com/2021/07/09/contributing-to-django-sql-dashboard/
ryan,technology,"I read about a project called
[Tryceratops](https://pypi.org/project/tryceratops/) on Twitter when it was
[tweeted about by Jeff
Triplett](https://twitter.com/webology/status/1414233648534933509)
I checked it out and it seemed interesting. I decided to use it on my
[simplest Django project](https://doestatisjrhaveanerrortoday.com) just to
give it a test drive running this command:
tryceratops .
and got this result:
Done processing! 🦖✨
Processed 16 files
Found 0 violations
Failed to process 1 files
Skipped 2340 files
This is nice, but what is the file that failed to process?
This left me with two options:
1. Complain that this awesome tool created by someone didn't do the thing I thought it needed to do
OR
1. Submit an issue to the project and offer to help.
I went with option 2 😀
My initial commit was made in a pretty naive way. It did the job, but not in
the best way for maintainability. I had a really great exchange with the
maintainer [Guilherme Latrova](https://github.com/guilatrova) about the change
that was made and he helped to direct me in a different direction.
The biggest thing I learned while working on this project (for Python at
least) was the `logging` library. Specifically I learned how to add:
* a formatter
* a handler
* a logger
For my change, I added a simple format with a verbose handler in a custom
logger. It looked something like this:
The formatter:
""simple"": {
""format"": ""%(message)s"",
},
The handler:
""verbose_output"": {
""class"": ""logging.StreamHandler"",
""level"": ""DEBUG"",
""formatter"": ""simple"",
""stream"": ""ext://sys.stdout"",
},
The logger:
""loggers"": {
""tryceratops"": {
""level"": ""INFO"",
""handlers"": [
""verbose_output"",
],
},
},
This allows the `verbose` flag to output the message to Standard Out and give
an `INFO` level of detail.
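Putting those three pieces together, here's a minimal, self-contained sketch of the same idea (my reconstruction, not the actual Tryceratops config):
import logging
import logging.config

LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {'format': '%(message)s'},
    },
    'handlers': {
        'verbose_output': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'simple',
            'stream': 'ext://sys.stdout',
        },
    },
    'loggers': {
        'tryceratops': {'level': 'INFO', 'handlers': ['verbose_output']},
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger('tryceratops')
logger.info('Done processing!')  # ends up on stdout via the verbose_output handler
Calling `dictConfig` once at startup is enough; any module can then call `logging.getLogger('tryceratops')` and pick up the same handler.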
Because of what I learned, I've started using the [logging
library](https://docs.python.org/3/library/logging.html) on some of my work
projects where I had tried to roll my own logging tool. I should have known
there was a logging tool in the Standard Library BEFORE I tried to roll my own
🤦🏻♂️
The other thing I (kind of) learned how to do was to squash my commits. I had
never had a need (or desire?) to squash commits before, but the commit message
is what Guilherme uses to generate the change log. So, with his guidance and
help I tried my best to squash those commits. Although in the end he had to do
it (still not entirely sure what I did wrong) I was exposed to the idea of
squashing commits and why they might be done. A win-win!
The best part about this entire experience was getting to work with Guilherme
Latrova. He was super helpful and patient and had great advice without telling
me what to do. The more I work within the Python ecosystem the more I'm just
blown away by just how friendly and helpful everyone is and it's what makes me
want to do these kinds of projects.
If you haven't had a chance to work on an open source project, I highly
recommend it. It's a great chance to learn and to meet new people.
",2021-08-07,contributing-to-tryceratops,"I read about a project called
[Tryceratops](https://pypi.org/project/tryceratops/) on Twitter when it was
[tweeted about by Jeff
Triplet](https://twitter.com/webology/status/1414233648534933509)
I checked it out and it seemed interesting. I decided to use it on my
[simplest Django project](https://doestatisjrhaveanerrortoday.com) just to
give it a test drive running this command:
tryceratops .
and got this result …
",Contributing to Tryceratops,https://www.ryancheley.com/2021/08/07/contributing-to-tryceratops/
ryan,technology,"Creating meaningful, long #hastags can be a pain in the butt.
There you are, writing up a witty tweet or making that perfect caption for
your instagram pic and you realize that you have a fantastic idea for a hash
tag that is more of a sentence than a single word.
You proceed to write it out and unleash your masterpiece to the world and just
as you hit the submit button you notice that you have a typo, or the wrong
spelling of a word and #ohcrap you need to delete and retweet!
That led me to write a [Drafts](https://getdrafts.com) Action to take care of
that.
I’ll leave [others to write about the virtues of
Drafts](https://www.macstories.net/reviews/drafts-5-the-macstories-review/),
but it’s fantastic.
The Action I created has two steps: (1) to run some JavaScript and (2) to copy
the contents of the draft to the Clipboard. You can get my action
[here](https://actions.getdrafts.com/a/1Uo).
Here’s the JavaScript that I used to take a big long sentence and turn it into
a social media worthy hashtag
var contents = draft.content;
var newContents = ""#"";
editor.setText(newContents+contents.replace(/ /g, """").toLowerCase());
Super simple, but holy crap does it help!
",2019-03-30,creating-hastags-for-social-media-with-a-drafts-action,"Creating meaningful, long #hastags can be a pain in the butt.
There you are, writing up a witty tweet or making that perfect caption for
your instagram pic and you realize that you have a fantastic idea for a hash
tag that is more of a sentence than a single …
",Creating Hastags for Social Media with a Drafts Action,https://www.ryancheley.com/2019/03/30/creating-hastags-for-social-media-with-a-drafts-action/
ryan,technology,"I’ve mentioned before that I have been working on getting the hummingbird
video upload automated.
Each time I thought I had it, and each time I was wrong.
For some reason I could run it from the command line without issue, but when
the cronjob would try and run it ... nothing.
Turns out, it was running, it just wasn’t doing anything. And that was my
fault.
The file I had set up in the cronjob was called `run_script.sh`
At first I was confused because the script was supposed to be writing out all
of its activities to a log file. But it didn’t appear to.
Then I noticed that the log.txt file it was writing was in the main `~`
directory. That should have been my first clue.
I kept trying to get the script to run, but suddenly, in a blaze of glory,
realized that it **was** running, it just wasn’t doing anything.
And it wasn’t doing anything for the same reason that the log file was being
written to the `~` directory.
All of the paths were relative instead of absolute, so when the script ran the
command `./create_mp4.sh` it looked for that script in the home directory,
didn’t find it, and moved on.
The fix was simple enough, just add absolute paths and we’re golden.
That means my `run_script.sh` goes from this:
# Create the script that will be run
./create_script.sh
echo ""Create Shell Script: $(date)"" >> log.txt
# make the script that was just created executable
chmod +x /home/pi/Documents/python_projects/create_mp4.sh
# Create the script to create the mp4 file
/home/pi/Documents/python_projects/create_mp4.sh
echo ""Create MP4 Shell Script: $(date)"" >> /home/pi/Documents/python_projects/log.txt
# upload video to YouTube.com
/home/pi/Documents/python_projects/upload.sh
echo ""Uploaded Video to YouTube.com: $(date)"" >> /home/pi/Documents/python_projects/log.txt
# Next we remove the video files locally
rm /home/pi/Documents/python_projects/*.h264
echo ""removed h264 files: $(date)"" >> /home/pi/Documents/python_projects/log.txt
rm /home/pi/Documents/python_projects/*.mp4
echo ""removed mp4 file: $(date)"" >> /home/pi/Documents/python_projects/log.txt
To this:
# change to the directory with all of the files
cd /home/pi/Documents/python_projects/
# Create the script that will be run
/home/pi/Documents/python_projects/create_script.sh
echo ""Create Shell Script: $(date)"" >> /home/pi/Documents/python_projects/log.txt
# make the script that was just created executable
chmod +x /home/pi/Documents/python_projects/create_mp4.sh
# Create the script to create the mp4 file
/home/pi/Documents/python_projects/create_mp4.sh
echo ""Create MP4 Shell Script: $(date)"" >> /home/pi/Documents/python_projects/log.txt
# upload video to YouTube.com
/home/pi/Documents/python_projects/upload.sh
echo ""Uploaded Video to YouTube.com: $(date)"" >> /home/pi/Documents/python_projects/log.txt
# Next we remove the video files locally
rm /home/pi/Documents/python_projects/*.h264
echo ""removed h264 files: $(date)"" >> /home/pi/Documents/python_projects/log.txt
rm /home/pi/Documents/python_projects/*.mp4
echo ""removed mp4 file: $(date)"" >> /home/pi/Documents/python_projects/log.txt
I made this change and then started getting an error about not being able to
access a `json` file necessary for the upload to
[YouTube](https://www.youtube.com). Sigh.
Then while searching for what directory the cronjob was running from I found
[this very simple](https://unix.stackexchange.com/questions/38951/what-is-the-
working-directory-when-cron-executes-a-job) idea. The response was, why not
just change it to the directory you want. 🤦♂️
I added the `cd` to the top of the file:
# change to the directory with all of the files
cd /home/pi/Documents/python_projects/
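For completeness, the crontab entry that kicks all of this off looks something like the line below (the 7pm schedule is just an example, the actual time isn't part of this post):
# m h dom mon dow  command
0 19 * * * /home/pi/Documents/python_projects/run_script.sh >> /home/pi/Documents/python_projects/log.txt 2>&1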
Anyway, now it works. Finally!
Tomorrow will be the first time (unless of course something else goes wrong)
that the entire process will be automated. Super pumped!
",2018-04-10,cronjob-finally,"I’ve mentioned before that I have been working on getting the hummingbird
video upload automated.
Each time I thought I had it, and each time I was wrong.
For some reason I could run it from the command line without issue, but when
the cronjob would try and run …
",Cronjob ... Finally,https://www.ryancheley.com/2018/04/10/cronjob-finally/
ryan,technology,"After **days** of trying to figure this out, I finally got the video to upload
via a cronjob.
There were 2 issues.
## Issue the first
Finally found the issue. [Original script from YouTube developers
guide](https://developers.google.com/youtube/v3/guides/uploading_a_video) had
this:
CLIENT_SECRETS_FILE = ""client_secrets.json""
And then a couple of lines later, this:
% os.path.abspath(os.path.join(os.path.dirname(__file__), CLIENT_SECRETS_FILE))
When `crontab` would run the script it would run from a path that wasn’t where
the `CLIENT_SECRETS_FILE` file was and so a message would be displayed:
WARNING: Please configure OAuth 2.0
To make this sample run you will need to populate the client_secrets.json file
found at:
%s
with information from the Developers Console
https://console.developers.google.com/
For more information about the client_secrets.json file format, please visit:
https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
What I needed to do was to update the `CLIENT_SECRETS_FILE` to be the whole
path so that it could always find the file.
A simple change:
CLIENT_SECRETS_FILE = os.path.abspath(os.path.join(os.path.dirname(__file__), CLIENT_SECRETS_FILE))
## Issue the second
When the `create_mp4.sh` script would run it was reading all of the `h264`
files from the directory where they lived **BUT** they were attempting to
output the `mp4` file to `/` which it didn’t have permission to write to.
This was failing silently (I’m still not sure how I could have caught the
error). Since there was no `mp4` file to upload that script was failing
(though it was true that the location of the `CLIENT_SECRETS_FILE` was an
issue).
What I needed to do was change the `create_mp4.sh` file so that the
MP4Box command output the `mp4` file to the proper directory. The script went
from this:
(echo '#!/bin/sh'; echo -n ""MP4Box""; array=($(ls ~/Documents/python_projects/*.h264)); for index in ${!array[@]}; do if [ ""$index"" -eq 0 ]; then echo -n "" -add ${array[index]}""; else echo -n "" -cat ${array[index]}""; fi; done; echo -n "" hummingbird.mp4"") > create_mp4.sh
To this:
(echo '#!/bin/sh'; echo -n ""MP4Box""; array=($(ls ~/Documents/python_projects/*.h264)); for index in ${!array[@]}; do if [ ""$index"" -eq 0 ]; then echo -n "" -add ${array[index]}""; else echo -n "" -cat ${array[index]}""; fi; done; echo -n "" /home/pi/Documents/python_projects/hummingbird.mp4"") > /home/pi/Documents/python_projects/create_mp4.sh
The last bit `/home/pi/Documents/python_projects/create_mp4.sh` may not be
_necessary_ but I’m not taking any chances.
The [video posted tonight](https://www.youtube.com/watch?v=OaRiW1aFk9k) is the
first one that was completely automatic!
Now … if I could just figure out how to automatically fill up my hummingbird
feeder.
",2018-04-20,cronjob-redux,"After **days** of trying to figure this out, I finally got the video to upload
via a cronjob.
There were 2 issues.
## Issue the first
Finally found the issue. [Original script from YouTube developers
guide](https://developers.google.com/youtube/v3/guides/uploading_a_video)had
this:
CLIENT_SECRETS_FILE = ""client_secrets.json""
And then a couple of lines later, this:
% os.path …
",Cronjob Redux,https://www.ryancheley.com/2018/04/20/cronjob-redux/
ryan,technology,"[Dr Drang has posted on Daylight Savings in the
past](http://www.leancrew.com/all-this/2013/03/why-i-like-dst/), but in a
recent [post](http://leancrew.com/all-this/2018/03/one-table-following-
another/) he critiqued (rightly so) the data presentation by a journalist at
the Washington Post on Daylight Savings, and that got me thinking.
In the post he generated a chart showing both the total number of daylight
hours and the sunrise / sunset times in Chicago. However, initially he didn’t
post the code on how he generated it. The next day, in a follow up
[post](http://leancrew.com/all-this/2018/03/the-sunrise-plot/), he did and
that **really** got me thinking.
I wonder what the chart would look like for cities up and down the west coast
(say from San Diego, CA to Seattle WA)?
Drang’s post had all of the code necessary to generate the graph, but for the
data munging, he indicated:
> > If I were going to do this sort of thing on a regular basis, I’d write a
> script to handle this editing, but for a one-off I just did it “by hand.”
Doing it by hand wasn’t going to work for me if I was going to do several
cities and so I needed to write a parser for the source of the data ([The US
Naval Observatory](http://aa.usno.navy.mil)).
The entire script is on my GitHub [sunrise
_sunset_](https://github.com/ryancheley/sunrise_sunset) repo. I won’t go into
the nitty gritty details, but I will call out a couple of things that I
discovered during the development process.
Writing a parser is hard. Like _really_ hard. Each time I thought I had it, I
didn’t. I was finally able to get the parser to work on cities with `01`,
`29`, `30`, or `31` in their longitude / latitude combinations.
I generated the same graph as Dr. Drang for the following cities:
* Phoenix, AZ
* Eugene, OR
* Portland
* Salem, OR
* Seaside, OR
* Eureka, CA
* Indio, CA
* Long Beach, CA
* Monterey, CA
* San Diego, CA
* San Francisco, CA
* San Luis Obispo, CA
* Ventura, CA
* Ferndale, WA
* Olympia, WA
* Seattle, WA
Why did I pick a city in Arizona? They don’t do Daylight Savings and I wanted
to have a comparison of what it’s like for them!
The charts in latitude order (from south to north) are below:
San Diego

Phoenix

Indio

Long Beach

Ventura

San Luis Obispo

Monterey

San Francisco

Eureka

Eugene

Salem

Portland

Seaside

Olympia

Seattle

Ferndale

While these images do show the different impact of Daylight Savings, I think
the images are more compelling when shown as a GIF:

We see just how different the impacts of DST are on each city depending on
their latitude.
One of [Dr. Drang’s main points in support of
DST](http://www.leancrew.com/all-this/2013/03/why-i-like-dst/) is:
> > If, by the way, you think the solution is to stay on DST throughout the
> year, I can only tell you that we tried that back in the 70s and it didn’t
> turn out well. Sunrise here in Chicago was after 8:00 am, which put school
> children out on the street at bus stops before dawn in the dead of winter.
> It was the same on the East Coast. Nobody liked that.
I think that comment says more about our school system and less about the need
for DST.
For this whole argument I’m way more on the side of CGP Grey who does a [great
job of explaining what Daylight Saving Time
is](https://www.youtube.com/watch?v=84aWtseb2-4).
I think we may want to start looking at a Universal Planetary time (say UTC)
and base all activities on that **regardless** of where you are in the world.
The only reason 5am _seems_ early (to some people) is because we’ve
collectively decided that 5am (depending on the time of the year) is either
**WAY** before sunrise or just a bit before sunrise, but really it’s just a
number.
If we used UTC in California (where I’m at) 5am would be 12pm. Normally 12pm
would be lunch time, but that’s only a convention that we have constructed. It
could just as easily be the crack of dawn as it could be lunch time.
Do I think a conversion like this will ever happen? No. I just really hope
that at some point in the distant future when aliens finally come and visit
us, we aren’t late (or them early) because we have such a wacky time system
here.
",2018-03-26,daylight-savings-time,"[Dr Drang has posted on Daylight Savings in the
past](http://www.leancrew.com/all-this/2013/03/why-i-like-dst/), but in a
recent [post](http://leancrew.com/all-this/2018/03/one-table-following-
another/) he critiqued (rightly so) the data presentation by a journalist at
the Washington Post on Daylight Savings, and that got me thinking.
In the post he generated a chart showing both the total number of …
",Daylight Savings Time,https://www.ryancheley.com/2018/03/26/daylight-savings-time/
ryan,technology,"Normally when I start a new Django project I’ll use the PyCharm setup wizard,
but recently I wanted to try out VS Code for a Django project and was super
stumped when I would get a message like this:
ERROR:root:code for hash md5 was not found.
Traceback (most recent call last):
File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 147, in
globals()[__func_name] = __get_hash(__func_name)
File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 97, in __get_builtin_constructor
raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type md5
ERROR:root:code for hash sha1 was not found.
Traceback (most recent call last):
File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 147, in
globals()[__func_name] = __get_hash(__func_name)
File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 97, in __get_builtin_constructor
raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha1
ERROR:root:code for hash sha224 was not found.
Traceback (most recent call last):
File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 147, in
globals()[__func_name] = __get_hash(__func_name)
File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 97, in __get_builtin_constructor
raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha224
ERROR:root:code for hash sha256 was not found.
Traceback (most recent call last):
File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 147, in
globals()[__func_name] = __get_hash(__func_name)
File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 97, in __get_builtin_constructor
raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha256
ERROR:root:code for hash sha384 was not found.
Traceback (most recent call last):
File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 147, in
globals()[__func_name] = __get_hash(__func_name)
File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 97, in __get_builtin_constructor
raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha384
ERROR:root:code for hash sha512 was not found.
Traceback (most recent call last):
File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 147, in
globals()[__func_name] = __get_hash(__func_name)
File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 97, in __get_builtin_constructor
raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha512
Here are the steps I was using to get started
From a directory I wanted to create the project I would set up my virtual
environment
python3 -m venv venv
And then activate it
source venv/bin/activate
Next, I would install Django
pip install django
Next, using the `startproject` command per the
[docs](https://docs.djangoproject.com/en/3.2/ref/django-admin/#startproject
""Start a new Django Project"") I would
django-admin startproject my_great_project .
And get the error message above 🤦🏻♂️
The strangest part about the error message is that it references Python2.7
everywhere … which is odd because I’m in a Python3 virtual environment.
I did a `pip list` and got:
Package Version
---------- -------
asgiref 3.3.4
Django 3.2.4
pip 21.1.2
pytz 2021.1
setuptools 49.2.1
sqlparse 0.4.1
OK … so everything is in my virtual environment. Let’s drop into the REPL and
see what’s going on

Well, that looks to be OK.
Next, I checked the contents of my directory using `tree -L 2`
├── manage.py
├── my_great_project
│ ├── __init__.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
└── venv
├── bin
├── include
├── lib
└── pyvenv.cfg
Yep … that looks good too.
OK, let’s go look at the installed packages for Python 2.7 then. On macOS
they’re installed at
/usr/local/lib/python2.7/site-packages
Looking in there and I see that Django is installed.
OK, let’s use pip to uninstall Django from Python2.7, except that `pip` gives
essentially the same result as running the `django-admin` command.
OK, let’s just remove it manually. After a bit of googling I found this
[Stackoverflow](https://stackoverflow.com/a/8146552) answer on how to remove
the offending package (which is what I assumed would be the answer, but better
to check, right?)
After removing the `Django` install from Python 2.7 and running `django-admin
--version` I get

So I googled that error message and found another answers on
[Stackoverflow](https://stackoverflow.com/a/10756446) which lead me to look at
the `manage.py` file. When I `cat` the file I get:
# manage.py
#!/usr/bin/env python
import os
import sys
...
That first line SHOULD be finding the Python executable in my virtual
environment, but it’s not.
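A quick way to check which `django-admin` script (and which interpreter it will use) is actually being picked up, not a step from my original debugging, just a handy sketch:
# which script comes first on the PATH
which django-admin
# and the shebang line that script will run with
head -1 $(which django-admin)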
Next I googled the error message `django-admin code for hash sha384 was not
found`
Which lead to this [Stackoverflow](https://stackoverflow.com/a/60575879)
answer. I checked to see if Python2 was installed with brew using
brew leaves | grep python
which returned `python@2`
Based on the answer above, the solution was to uninstall the Python2 that was
installed by `brew`. Now, although [Python2 has
retired](https://www.python.org/doc/sunset-python-2/), I was leery of
uninstalling it on my system without first verifying that I could remove the
brew version without impacting the system version which is needed by macOS.
Using `brew info python@2` I determined where `brew` installed Python2 and
compared it to where Python2 is installed by macOS and they are indeed
different
Output of `brew info python@2`
...
/usr/local/Cellar/python@2/2.7.15_1 (7,515 files, 122.4MB) *
Built from source on 2018-08-05 at 15:18:23
...
Output of `which python`
`/usr/bin/python`
OK, now we can remove the version of Python2 installed by `brew`
brew uninstall python@2
Now with all of that cleaned up, let’s try again. From a clean project
directory:
python3 -m venv venv
source venv/bin/activate
pip install django
django-admin --version
The last command returned
zsh: /usr/local/bin/django-admin: bad interpreter: /usr/local/opt/python@2/bin/python2.7: no such file or directory
3.2.4
OK, I can get the version number and it mostly works, but can I create a new
project?
django-admin startproject my_great_project .
Which returns
zsh: /usr/local/bin/django-admin: bad interpreter: /usr/local/opt/python@2/bin/python2.7: no such file or directory
BUT, the project was installed
├── db.sqlite3
├── manage.py
├── my_great_project
│ ├── __init__.py
│ ├── __pycache__
│ ├── asgi.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
└── venv
├── bin
├── include
├── lib
└── pyvenv.cfg
And I was able to run it
python manage.py runserver

Success! I’ve still got that last bug to deal with, but that’s a story for a
different day!
## Short Note
My initial fix, and my initial draft for this article, was to use the old
adage, turn it off and turn it back on. In this case, the implementation would
be to `deactivate` and then re-`activate` the virtual environment and that’s
what I’d been doing.
As I was writing up this article I was hugely influenced by the work of [Julia
Evans](https://twitter.com/b0rk) and kept asking, “but why?”. She’s been
writing a lot of awesome, amazing things, and has several [zines for
purchase](https://wizardzines.com) that I would highly recommend.
She’s also generated a few [debugging
‘games’](https://jvns.ca/blog/2021/04/16/notes-on-debugging-puzzles/) that are
a lot of fun.
Anyway, thanks Julie for pushing me to figure out the why for this issue.
## Post Script
I figured out the error message above and figured, well, I might as well
update the post! I thought it had to do with `zsh`, but no, it was just more
of the same.
The issue was that Django had been installed in the base Python2 (which I
knew). All I had to do was to uninstall it with pip.
pip uninstall django
The trick was that pip wasn't working out for me ... it was generating errors.
So I had to run the command
python -m pip uninstall django
I had to run this AFTER I put the Django folder back into
`/usr/local/lib/python2.7/site-packages` (if you'll recall from above, I
removed it from the folder)
After that clean up was done, everything worked out as expected! I just had to
keep digging!
",2021-06-13,debugging-setting-up-a-django-project,"Normally when I start a new Django project I’ll use the PyCharm setup wizard,
but recently I wanted to try out VS Code for a Django project and was super
stumped when I would get a message like this:
ERROR:root:code for hash md5 was not found.
Traceback …
",Debugging Setting up a Django Project,https://www.ryancheley.com/2021/06/13/debugging-setting-up-a-django-project/
ryan,technology,"## Previous Efforts
When I first heard of Django I thought it looked like a really interesting, and
Pythonic, way to get a website up and running. I spent a whole weekend putting
together a site locally and then, using Digital Ocean, decided to push my idea
up onto a live site.
One problem that I ran into, which EVERY new Django Developer will run into
was static files. I couldn’t get static files to work. No matter what I did,
they were just … missing. I proceeded to spend the next few weekends trying to
figure out why, but alas, I was not very good (or patient) with reading
documentation and gave up.
Fast forward a few years, and while taking the 100 Days of Code on the Web
Python course from Talk Python to Me I was able to follow along on a part of
the course that pushed up a Django App to Heroku.
I wrote about that effort [here](https://pybit.es/my-first-django-app.html).
Needless to say, I was pretty pumped. But, I was wondering, is there a way I
can actually get a Django site to work on a non-Heroku (PaaS) type
infrastructure.
## Inspiration
While going through my Twitter timeline I came across a retweet from
TestDrive.io of [Matt Segal](https://mattsegal.dev/simple-django-
deployment.html). He has an **amazing** walk through of deploying a Django
site on the hard level (i.e. using Windows). It’s a mix of Blog posts and
YouTube Videos and I highly recommend it. There is some NSFW language, BUT if
you can get past that (and I can) it’s a great resource.
This series is meant to be a written record of what I did to implement these
recommendations and suggestions, and then to push myself a bit further to
expand the complexity of the app.
## Articles
A list of the Articles will go here. For now, here’s a rough outline of the
planned posts:
* [Setting up the Server (on Digital Ocean)](/setting-up-the-server-on-digital-ocean.html)
* [Getting your Domain to point to Digital Ocean Your Server](/getting-your-domain-to-point-to-digital-ocean-your-server.html)
* [Preparing the code for deployment to Digital Ocean](/preparing-the-code-for-deployment-to-digital-ocean.html)
* [Automating the deployment](/automating-the-deployment.html)
* Enhancements
The ‘Enhancements’ will be multiple follow up posts (hopefully) as I catalog
improvements made to the site. My currently planned enhancements are:
* Creating the App
* [Migrating from SQLite to Postgres](/using-postgresql.html)
* Integrating Git
* [Having Multiple Sites on a single Server](/setting-up-multiple-django-sites-on-a-digital-ocean-server.html)
* Adding Caching
* Integrating S3 on AWS to store Static Files and Media Files
* Migrate to Docker / Kubernetes
",2021-01-24,deploying-a-django-site-to-digital-ocean-a-series,"## Previous Efforts
When I first heard of Django I thought it looks like a really interesting, and
Pythonic way, to get a website up and running. I spent a whole weekend putting
together a site locally and then, using Digital Ocean, decided to push my idea
up onto a live …
",Deploying a Django Site to Digital Ocean - A Series,https://www.ryancheley.com/2021/01/24/deploying-a-django-site-to-digital-ocean-a-series/
ryan,technology,"I work at a place that is heavily investing in the Microsoft Tech Stack.
Windows Servers, c#.Net, Angular, VB.net, Windows Work Stations, Microsoft SQL
Server ... etc
When not at work, I **really** like working with Python and Django. I've never
really thought I'd be able to combine the two until I discovered the package
mssql-django which was released Feb 18, 2021 in alpha and as a full-fledged
version 1 in late July of that same year.
Ever since then I've been trying to figure out how to incorporate Django into
my work life.
I'm going to use this series as an outline of how I'm working through the
process of getting Django to be useful at work. The issues I run into, and the
solutions I'm (hopefully) able to achieve.
I'm also going to use this as a more in depth analysis of an accompanying talk
I'm hoping to give at [Django Con 2022](https://2022.djangocon.us) later this
year.
I'm going to break this down into a several part series that will roughly
align with the talk I'm hoping to give. The parts will be:
1. Introduction/Background
2. Overview of the Project
3. Wiring up the Project Models
4. Database Routers
5. Django Admin Customization
6. Admin Documentation
7. Review & Resources
My intention is to publish one part every week or so. Sometimes the posts will
come fast, and other times not. This will mostly be due to how well I'm doing
with writing up my findings and/or getting screenshots that will work.
The tool set I'll be using is:
* docker
* docker-compose
* Django
* MS SQL
* SQLite
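To give a flavor of where this is headed before the series starts, here's a minimal sketch of what a Django settings entry using mssql-django might look like. The `legacy` alias, database name, host, and credentials below are placeholders I've made up for illustration; wiring this up for real (and routing queries between the two databases) is what the later parts will cover.

DATABASES = {
    'default': {
        # the Django-managed database
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'db.sqlite3',
    },
    'legacy': {
        # engine name provided by the mssql-django package
        'ENGINE': 'mssql',
        'NAME': 'LegacyDB',   # placeholder database name
        'HOST': 'mssql',      # placeholder host, e.g. a docker-compose service name
        'PORT': '1433',
        'USER': 'sa',         # placeholder credentials
        'PASSWORD': 'change-me',
        'OPTIONS': {'driver': 'ODBC Driver 17 for SQL Server'},
    },
}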
",2022-06-15,django-and-legacy-databases,"I work at a place that is heavily investing in the Microsoft Tech Stack.
Windows Servers, c#.Net, Angular, VB.net, Windows Work Stations, Microsoft SQL
Server ... etc
When not at work, I **really** like working with Python and Django. I've never
really thought I'd be able to combine the …
",Django and Legacy Databases,https://www.ryancheley.com/2022/06/15/django-and-legacy-databases/
ryan,technology,"First, what are ""the commons""? The concept of ""the commons"" refers to
resources that are shared and managed collectively by a community, rather than
being owned privately or by the state. This idea has been applied to natural
resources like air, water, and grazing land, but it has also expanded to
include digital and cultural resources, such as open-source software,
knowledge databases, and creative works.
As Organization Administrators of Django Commons, we're focusing on
sustainability and stewardship as key aspects.
Asking for help is hard, but it can be done more easily in a safe environment.
As we saw with the [xz utils
backdoor](https://en.wikipedia.org/wiki/XZ_Utils_backdoor) attack, maintainer
burnout is real. And while there are several arguments about being part of a
'supply chain', if we can, as a community, offer up a place where maintainers
can work together for the sustainability and support of their packages, the
Django community will be better off!
From the [README](https://github.com/django-
commons/membership/blob/main/README.md) of the membership repo in Django
Commons
> Django Commons is an organization dedicated to supporting the community's
> efforts to maintain packages. It seeks to improve the maintenance experience
> for all contributors; reducing the barrier to entry for new contributors and
> reducing overhead for existing maintainers.
OK, but what does this new organization get me as a maintainer? The (stretch)
goal is that we'll be able to provide support to maintainers. Whether that's
helping to identify best practices for packages (like requiring tests), or
normalize the idea that maintainers can take a step back from their project
and know that there will be others to help keep the project going. Being able
to accomplish these two goals would be amazing ... but we want to do more!
In the long term we're hoping that we're able to do something to help provide
compensation to maintainers, but as I said, that's a long term goal.
The project was spearheaded by Tim Schilling and he was able to get lots of
interest from various folks in the Django Community. But I think one of the
great aspects of this community project is the transparency that we're
striving for. You can see [here](https://github.com/orgs/django-
commons/discussions/19) an example of a discussion, out in the open, as we try
to define what we're doing, together. Also, while Tim spearheaded this effort,
we're really all working as equals towards a common goal.
What we're building here is a sustainable infrastructure and community. This
community will allow packages to have a good home, to allow people to be as
active as they want to be, and also allow people to take a step back when they
need to.
Too often in tech, and especially in OSS, maintainers / developers will work
and work and work because the work they do is generally interesting, and has
interesting problems to try and solve.
But this can have a downside that we've all seen ... burnout.
By providing a platform for maintainers to 'park' their projects, along with
the necessary infrastructure to keep them active, the goal is to allow
maintainers the opportunity to take a break if, or when, they need to. When
they're ready to return, they can do so with renewed interest, with new
contributors and maintainers who have helped create a more sustainable
environment for the open-source project.
The idea for this project is very similar to, but different from, Jazz Band.
Again, from the [README](https://github.com/django-
commons/membership/blob/main/README.md)
> Django Commons and Jazzband have similar goals, to support community-
> maintained projects. There are two main differences. The first is that
> Django Commons leans into the GitHub paradigm and centers the organization
> as a whole within GitHub. This is a risk, given there's some vendor lock-in.
> However, the repositories are still cloned to several people's machines and
> the organization controls the keys to PyPI, not GitHub. If something were to
> occur, it's manageable.
>
> The second is that Django Commons is built from the beginning to have more
> than one administrator. Jazzband has been [working for a while to add
> additional roadies](https://github.com/jazzband/help/issues/196)
> (administrators), but there hasn't been visible progress. Given the
> importance of several of these projects it's a major risk to the community
> at large to have a single point of failure in managing the projects. By
> being designed from the start to spread the responsibility, it becomes
> easier to allow people to step back and others to step up, making Django
> more sustainable and the community stronger.
One of the goals for Django Commons is to be very public about what's going
on. We actively encourage use of the
[Discussions](https://github.com/orgs/django-commons/discussions) feature in
GitHub and have several active conversations happening there now.[1][2][3]
So far we've been able to migrate ~~3~~ 4 libraries[4][5][6][7] into Django Commons.
Each one has been a great learning experience, not only for the library
maintainers, but also for the Django Commons admins.
We're working to automate as much of the work as possible. [Daniel
Moran](https://github.com/cunla/) has done an amazing job of writing Terraform
scripts to help in the automation process.
While there are still several manual steps, with each new library, we discover
new opportunities for automation.
This is an exciting project to be a part of. If you're interested in joining
us you have a couple of options
1. [Transfer your project](https://github.com/django-commons/membership/issues/new?assignees=django-commons%2Fadmins&labels=Transfer+project+in&projects=&template=transfer-project-in.yml&title=%F0%9F%9B%AC+%5BINBOUND%5D+-+%3Cproject%3E) into Django Commons
2. [Join as member](https://github.com/django-commons/membership/issues/new?assignees=django-commons%2Fadmins&labels=New+member&projects=&template=new-member.yml&title=%E2%9C%8B+%5BMEMBER%5D+-+%3Cyour+handle%3E) and help contribute to one of the projects that's already in Django Commons
I'm looking forward to seeing you be part of this amazing community!
1. [How to approach existing libraries](https://github.com/orgs/django-commons/discussions/52) ↩︎
2. [Creating a maintainer-contributor feedback loop](https://github.com/orgs/django-commons/discussions/61) ↩︎
3. [DjangoCon US 2024 Maintainership Open pace](https://github.com/orgs/django-commons/discussions/42) ↩︎
4. [django-tasks-scheduler](https://github.com/django-commons/django-tasks-scheduler) ↩︎
5. [django-typer](https://github.com/django-commons/django-typer) ↩︎
6. [django-fsm-2](https://github.com/django-commons/django-fsm-2) ↩︎
7. [django-debug-toolbar](https://github.com/django-commons/django-debug-toolbar/) ↩︎
",2024-10-23,django-commons,"First, what are ""the commons""? The concept of ""the commons"" refers to
resources that are shared and managed collectively by a community, rather than
being owned privately or by the state. This idea has been applied to natural
resources like air, water, and grazing land, but it has also expanded …
",Django Commons,https://www.ryancheley.com/2024/10/23/django-commons/
ryan,technology,"I’ve been working on a Django Project for a while and one of the apps I have
tracks candidates. These candidates have dates of a specific type.
The models look like this:
## Candidate
class Candidate(models.Model):
    first_name = models.CharField(max_length=128)
    last_name = models.CharField(max_length=128)
    resume = models.FileField(storage=PrivateMediaStorage(), blank=True, null=True)
    cover_leter = models.FileField(storage=PrivateMediaStorage(), blank=True, null=True)
    email_address = models.EmailField(blank=True, null=True)
    linkedin = models.URLField(blank=True, null=True)
    github = models.URLField(blank=True, null=True)
    rejected = models.BooleanField()
    position = models.ForeignKey(
        ""positions.Position"",
        on_delete=models.CASCADE,
    )
    hired = models.BooleanField(default=False)
## CandidateDate
class CandidateDate(models.Model):
    candidate = models.ForeignKey(
        ""Candidate"",
        on_delete=models.CASCADE,
    )
    date_type = models.ForeignKey(
        ""CandidateDateType"",
        on_delete=models.CASCADE,
    )
    candidate_date = models.DateField(blank=True, null=True)
    candidate_date_note = models.TextField(blank=True, null=True)
    meeting_link = models.URLField(blank=True, null=True)

    class Meta:
        ordering = [""candidate"", ""-candidate_date""]
        unique_together = (
            ""candidate"",
            ""date_type"",
        )
## CandidateDateType
class CandidateDateType(models.Model):
    date_type = models.CharField(max_length=24)
    description = models.CharField(max_length=255, null=True, blank=True)
You’ll see from the CandidateDate model that the fields `candidate` and
`date_type` are unique. One problem that I’ve been running into is how to help
make that an easier thing to see in the form where the dates are entered.
The Django built in validation will display an error message if a user were to
try and select a `candidate` and `date_type` that already existed, but it felt
like this could be done better.
I did a fair amount of Googling and had a couple of different _bright_ ideas,
but ultimately it came down to a pretty simple implementation of the `exclude`
keyword in the ORM
The initial `Form` looked like this:
class CandidateDateForm(ModelForm):
    class Meta:
        model = CandidateDate
        fields = [
            ""candidate"",
            ""date_type"",
            ""candidate_date"",
            ""meeting_link"",
            ""candidate_date_note"",
        ]
        widgets = {
            ""candidate"": HiddenInput,
        }
I updated it to include a `__init__` method which overrode the options in the
drop down.
def __init__(self, *args, **kwargs):
    super(CandidateDateForm, self).__init__(*args, **kwargs)
    try:
        candidate = kwargs[""initial""][""candidate""]
        candidate_date_set = CandidateDate.objects.filter(candidate=candidate).values_list(""date_type"", flat=True)
        qs = CandidateDateType.objects.exclude(id__in=candidate_date_set)
        self.fields[""date_type""].queryset = qs
    except KeyError:
        pass
Now, with this method the drop down will only show items which can be
selected, not all `CandidateDateType` options.
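For context, the `__init__` above only kicks in when the view passes the candidate through the form's `initial` data. Here's a minimal sketch of what such a view might look like; the view name, URL name, and template path are hypothetical, not pulled from my actual project:

from django.shortcuts import get_object_or_404, redirect, render

from .forms import CandidateDateForm  # app-relative imports assumed
from .models import Candidate


def add_candidate_date(request, candidate_id):  # hypothetical view
    candidate = get_object_or_404(Candidate, pk=candidate_id)
    if request.method == 'POST':
        # passing initial here means the form's __init__ can still find the candidate
        form = CandidateDateForm(request.POST, initial={'candidate': candidate})
        if form.is_valid():
            form.save()
            return redirect('candidate-detail', pk=candidate.pk)  # hypothetical URL name
    else:
        form = CandidateDateForm(initial={'candidate': candidate})
    return render(request, 'candidates/candidatedate_form.html', {'form': form})

With the candidate seeded in `initial`, the hidden candidate field is pre-populated and the `date_type` dropdown is filtered before the page ever renders.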
Seems like a better user experience AND I got to learn a bit about the Django
ORM.
",2021-01-23,django-form-filters,"I’ve been working on a Django Project for a while and one of the apps I have
tracks candidates. These candidates have dates of a specific type.
The models look like this:
## Candidate
class Candidate(models.Model):
first_name = models.CharField(max_length=128)
last_name = models.CharField(max_length=128)
resume = models …
",Django form filters,https://www.ryancheley.com/2021/01/23/django-form-filters/
ryan,technology,"# My Experience at DjangoCon US 2023
A few days ago I returned from DjangoCon US 2023 and wow, what an amazing
time. The only regret I have is that I didn't take very many pictures. This is
something I will need to work on for next year.
On Monday October 16th I gave a talk [Contributing to Django or how I learned
to stop worrying and just try to fix an ORM
Bug](https://2023.djangocon.us/talks/contributing-to-django-or-how-i-learned-
to-stop-worrying-and-just-try-to-fix-an-orm-bug/). The video will be posted on
YouTube in a few weeks. This was the first tech conference I've ever spoken
at!!!! I was super nervous leading up to the talk, and even a bit at the
start, but once I got going I finally settled in.
Here's me on stage taking a selfie with the crowd behind me

Luckily, my talk was one of the first non-Keynote talks so I was able to relax
and enjoy the rest of the conference.
After the conference talks ended on Wednesday I stuck around for the sprints.
This is such a great time to be able to work on open source projects (Django
adjacent or not) and just generally hang out with other Djangonauts. I was
able to do some work on DjangoPackages with Jeff Triplett, and just generally
hang out with some truly amazing people.
The Django community is just so great. I've been to many conferences before,
but this one is the first where I feel like I belong.
I am having some of those post-conference blues, but thankfully Kojo Idrissa
wrote something about how to [help with
that](https://kojoidrissa.com/conferences/community/pycon%20africa/noramgt/2019/08/11/post_conference_depression.html).
Taking his advice has helped me come down from the conference high.
Although the location of DjangoCon US 2024 hasn't been announced yet, I'm
making plans to attend.
I am also setting myself some goals to have completed by the start of DCUS
2024
* join the fundraising working group
* work on at least 1 code related ticket in Trac
* work on at least 1 doc related ticket in Trac
* have been part of a writing group with fellow Djangonauts and posted at least 1 article per month
I had a great experience speaking, and I **think** I'd like to do it again,
but I'm still working through that.
It's a lot harder to give a talk than I thought it would be! That being said,
I do have in my 'To Do' app a task to 'Brainstorm DjangoCon talk ideas' so
we'll see if (1) I'm able to come up with anything, and (2) I have a talk
accepted for 2024.
",2023-10-24,djangocon-us-2023,"# My Experience at DjangoCon US 2023
A few days ago I returned from DjangoCon US 2023 and wow, what an amazing
time. The only regret I have is that I didn't take very many pictures. This is
something I will need to work on for next year.
On Monday October …
",DjangoCon US 2023,https://www.ryancheley.com/2023/10/24/djangocon-us-2023/
ryan,technology,"At DjangoCon US 2023 I gave a talk, and wrote about my experience [preparing
for that talk](https://www.ryancheley.com/2023/12/15/so-you-want-to-give-a-
talk-at-a-conference/)
Well, I spoke again at DjangoCon US this year (2024) and had a similar, but
wildly different experience in preparing for my talk.
Last year I lamented that I didn't really track my time (which is weird
because I track my time for ALL sorts of things!).
This year, I did track my time and have a much better sense of how much time I
prepared for the talk.
Another difference between each year is that in 2023 I gave a 45 minute talk,
while this year my talk was 25 minutes.
I've heard that you need about 1 hour of prep time for each 1 minute of talk
that you're going to give. That means that, on average, for a 25 minute talk
I'd need about 25 hours of prep time.
[My time tracking shows](https://track.toggl.com/shared-
report/6c52f45a0feea26f7c8fd987abf73b2e) that I was a little short of that (19
hours) but my talk ended up being about 20 minutes, so it seems that maybe I
was on track for that.
This year, as last year, my general prep technique was to:
1. Give the presentation AND record it
2. Watch the recording and make notes about what I needed to change
3. Make the changes
I would typically do each step on a different day, though towards the end I
would do steps 2 and 3 on the same day, and during the last week I would do
all of the steps on the same day.
This flow really seems to help me get the most out of practicing my talk and
getting a sense of its strengths and weaknesses.
One issue that came up a week before I was to leave for DjangoCon US is that
my boss said I couldn't have anything directly related to my employer in the
presentation. My initial drafts didn't have specifics, but the examples I used
were too close for my comfort on that, so I ended up having to refactor that
part of my talk.
Honestly, I think it came out better because of it. During my practice runs I
felt like I was kind of dancing around topics, but once I removed them I felt
freer to just kind of speak my mind.
Preparing and giving talks like these are truly a ton of work. Yes, you'll
(most likely) be given a free ticket to the conference you're speaking at —
but unless you're a seasoned public speaker you will have to practice a lot to
give a great talk.
One thing I didn't mention in my prep time is that my talk was essentially
just a rendition of my series of blog posts I started writing at DjangoCon US
2023 ([Error Culture](https://www.ryancheley.com/2023/10/29/error-culture/))
So when you add in the time it took for me to brainstorm those articles,
write, and edit them, we're probably looking at another 5 - 7 hours of prep.
This puts me closer to the 25 hours of prep time for the 25 minute talk.
I've given 2 talks so far, and after each one I've said, 'Never again!'
It's been a few weeks since I gave my talk, and I have to say, I'm kind of
looking forward to trying to give a talk again next year. Now, I just need to
figure out what I would talk about that anyone would want to hear. 🤔
",2024-10-17,djangocon-us-2024-talk,"At DjangoCon US 2023 I gave a talk, and wrote about my experience [preparing
for that talk](https://www.ryancheley.com/2023/12/15/so-you-want-to-give-a-
talk-at-a-conference/)
Well, I spoke again at DjangoCon US this year (2024) and had a similar, but
wildly different experience in preparing for my talk.
Last year I lamented that I didn't really track my …
",DjangoCon US 2024 Talk,https://www.ryancheley.com/2024/10/17/djangocon-us-2024-talk/
ryan,technology,"I had read about a project called djhtml and wanted to use it on one of my
projects. The documentation is really good for adding it to precommit-ci, but
I wasn't sure what I needed to do to just run it on the command line.
It took a bit of googling, but I was finally able to get the right incantation
of commands to be able to get it to run on my templates:
djhtml -i $(find templates -name '*.html' -print)
But of course because I have the memory of a goldfish and this is more than 3
commands to try to remember to string together, instead of telling myself I
would remember it, I simply added it to a just file and now have this recipe:
# applies djhtml linting to templates
djhtml:
djhtml -i $(find templates -name '*.html' -print)
This means that I can now run `just djhtml` and I can apply djhtml's linting
to my templates.
Pretty darn cool if you ask me. But then I got to thinking, I can make this a
bit more general for 'linting' type activities. I include all of these in my
precommit-ci, but I figured, what the heck, might as well have a just recipe
for all of them!
So I refactored the recipe to be this:
# applies linting to project (black, djhtml, flake8)
lint:
djhtml -i $(find templates -name '*.html' -print)
black .
flake8 .
And now I can run all of these linting style libraries with a single command
`just lint`
",2021-08-22,djhtml-and-justfile,"I had read about a project called djhtml and wanted to use it on one of my
projects. The documentation is really good for adding it to precommit-ci, but
I wasn't sure what I needed to do to just run it on the command line.
It took a bit of …
",djhtml and justfile,https://www.ryancheley.com/2021/08/22/djhtml-and-justfile/
ryan,technology,"In one of my [previous
posts](https://www.ryancheley.com/blog/2016/11/22/twitter-word-cloud) I walked
through how I generated a wordcloud based on my most recent 20 tweets. I
thought it would be _neat_ to do this for my [Dropbox](https://www.dropbox.com)
file names as well, just to see if I could.
When I first tried to do it (as previously stated, the Twitter Word Cloud post
was the first python script I wrote) I ran into some difficulties. I didn't
really understand what I was doing (although I still don't **really**
understand, I at least have a vague idea of what the heck I'm doing now).
The script isn't much different than the [Twitter](https://www.twitter.com)
word cloud. The only real differences are:
1. the way in which the `words` variable is being populated
2. the mask that I'm using to display the cloud
In order to go get the information from the file system I use the `glob`
library:
import glob
The next lines have not changed
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS
from scipy.misc import imread
Instead of writing to a 'tweets' file I'm looping through the files, splitting
them at the `/` character and getting the last item (i.e. the file name) and
appending it to the list `f`:
f = []
for filename in glob.glob('/Users/Ryan/Dropbox/Ryan/**/*', recursive=True):
    f.append(filename.split('/')[-1])
The rest of the script generates the image and saves it to my Dropbox Account.
Again, instead of using a [Twitter](https://www.twitter.com) logo, I'm using a
**Cloud** image I found [here](http://www.shapecollage.com/shapes/mask-
cloud.png)
words = ' '
for line in f:
    words = words + line
stopwords = {'https'}
logomask = imread('mask-cloud.png')
wordcloud = WordCloud(
    font_path='/Users/Ryan/Library/Fonts/Inconsolata.otf',
    stopwords=STOPWORDS.union(stopwords),
    background_color='white',
    mask=logomask,
    max_words=1000,
    width=1800,
    height=1400
).generate(words)
plt.imshow(wordcloud.recolor(color_func=None, random_state=3))
plt.axis('off')
plt.savefig('/Users/Ryan/Dropbox/Ryan/Post Images/dropbox_wordcloud.png', dpi=300)
plt.show()
And we get this:

",2016-11-25,dropbox-files-word-cloud,"In one of my [previous
posts](https://www.ryancheley.com/blog/2016/11/22/twitter-word-cloud) I walked
through how I generated a wordcloud based on my most recent 20 tweets. I
thought it would be _neat_ to do this for my [Dropbox](https://www.dropbox.com)
file names as well, just to see if I could …
When I first tried to do it …
",Dropbox Files Word Cloud,https://www.ryancheley.com/2016/11/25/dropbox-files-word-cloud/
ryan,technology,"Integrating a version control system into your development cycle is just kind
of one of those things that you do, right? I use GitHub for my version
control, and its GitHub Actions to help with my deployment process.
There are 3 `yaml` files I have to get my local code deployed to my production
server:
* django.yaml
* dev.yaml
* prod.yaml
Each one serves its own purpose.
## django.yaml
The `django.yaml` file is used to run my tests and other actions on a GitHub
runner. It does this in 9 distinct steps, with the help of one Postgres service.
The steps are:
1. Set up Python 3.8 - setting up Python 3.8 on the docker image provided by GitHub
2. psycopg2 prerequisites - setting up `psycopg2` to use the Postgres service created
3. graphviz prerequisites - setting up the requirements for graphviz which creates an image of the relationships between the various models
4. Install dependencies - installs all of my Python package requirements via pip
5. Run migrations - runs the migrations for the Django App
6. Load Fixtures - loads data into the database
7. Lint - runs `black` on my code
8. Flake8 - runs `flake8` on my code
9. Run Tests - runs all of the tests to ensure they pass
name: Django CI
on:
  push:
    branches-ignore:
      - main
      - dev
jobs:
  build:
    runs-on: ubuntu-18.04
    services:
      postgres:
        image: postgres:12.2
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: github_actions
        ports:
          - 5432:5432
        # needed because the postgres container does not provide a healthcheck
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    steps:
      - uses: actions/checkout@v1
      - name: Set up Python 3.8
        uses: actions/setup-python@v1
        with:
          python-version: 3.8
      - uses: actions/cache@v1
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-
      - name: psycopg2 prerequisites
        run: sudo apt-get install python-dev libpq-dev
      - name: graphviz prerequisites
        run: sudo apt-get install graphviz libgraphviz-dev pkg-config
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install psycopg2
          pip install -r requirements/local.txt
      - name: Run migrations
        run: python manage.py migrate
      - name: Load Fixtures
        run: |
          python manage.py loaddata fixtures/User.json
          python manage.py loaddata fixtures/Sport.json
          python manage.py loaddata fixtures/League.json
          python manage.py loaddata fixtures/Conference.json
          python manage.py loaddata fixtures/Division.json
          python manage.py loaddata fixtures/Venue.json
          python manage.py loaddata fixtures/Team.json
      - name: Lint
        run: black . --check
      - name: Flake8
        uses: cclauss/GitHub-Action-for-Flake8@v0.5.0
      - name: Run tests
        run: coverage run -m pytest
## dev.yaml
The code here does essentially the same thing that is done in the `deploy.sh`
in my earlier post [Automating the Deployment](/automating-the-
deployment.html) except that it pulls code from my `dev` branch on GitHub onto
the server. The other difference is that this is on my UAT server, not my
production server, so if something goes off the rails, I don’t hose
production.
name: Dev CI
on:
  pull_request:
    branches:
      - dev
jobs:
  deploy:
    runs-on: ubuntu-18.04
    steps:
      - name: deploy code
        uses: appleboy/ssh-action@v0.1.2
        with:
          host: ${{ secrets.SSH_HOST_TEST }}
          key: ${{ secrets.SSH_KEY_TEST }}
          username: ${{ secrets.SSH_USERNAME }}
          script: |
            rm -rf StadiaTracker
            git clone --branch dev git@github.com:ryancheley/StadiaTracker.git
            source /home/stadiatracker/venv/bin/activate
            cd /home/stadiatracker/
            rm -rf /home/stadiatracker/StadiaTracker
            cp -r /root/StadiaTracker/ /home/stadiatracker/StadiaTracker
            cp /home/stadiatracker/.env /home/stadiatracker/StadiaTracker/StadiaTracker/.env
            pip -q install -r /home/stadiatracker/StadiaTracker/requirements.txt
            python /home/stadiatracker/StadiaTracker/manage.py migrate
            mkdir /home/stadiatracker/StadiaTracker/static
            mkdir /home/stadiatracker/StadiaTracker/staticfiles
            python /home/stadiatracker/StadiaTracker/manage.py collectstatic --noinput -v0
            systemctl daemon-reload
            systemctl restart stadiatracker
## prod.yaml
Again, the code here does essentially the same thing that is done in the
`deploy.sh` in my earlier post [Automating the Deployment](/automating-the-
deployment.html) except that it pulls code from my `main` branch on GitHub
onto the server.
name: Prod CI
on:
  pull_request:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-18.04
    steps:
      - name: deploy code
        uses: appleboy/ssh-action@v0.1.2
        with:
          host: ${{ secrets.SSH_HOST }}
          key: ${{ secrets.SSH_KEY }}
          username: ${{ secrets.SSH_USERNAME }}
          script: |
            rm -rf StadiaTracker
            git clone git@github.com:ryancheley/StadiaTracker.git
            source /home/stadiatracker/venv/bin/activate
            cd /home/stadiatracker/
            rm -rf /home/stadiatracker/StadiaTracker
            cp -r /root/StadiaTracker/ /home/stadiatracker/StadiaTracker
            cp /home/stadiatracker/.env /home/stadiatracker/StadiaTracker/StadiaTracker/.env
            pip -q install -r /home/stadiatracker/StadiaTracker/requirements.txt
            python /home/stadiatracker/StadiaTracker/manage.py migrate
            mkdir /home/stadiatracker/StadiaTracker/static
            mkdir /home/stadiatracker/StadiaTracker/staticfiles
            python /home/stadiatracker/StadiaTracker/manage.py collectstatic --noinput -v0
            systemctl daemon-reload
            systemctl restart stadiatracker
The general workflow is:
1. Create a branch on my local computer with `git switch -c branch_name`
2. Push the code changes to GitHub which kicks off the `django.yaml` workflow.
3. If everything passes then I do a pull request from `branch_name` into `dev`.
4. This kicks off the `dev.yaml` workflow which will update UAT
5. I check UAT to make sure that everything works like I expect it to (it almost always does … and when it doesn’t it’s because I’ve mucked around with a server configuration which is the problem, not my code)
6. I do a pull request from `dev` to `main` which updates my production server
My next enhancement is to kick off the `dev.yaml` process if the tests from
`django.yaml` all pass, i.e. do an auto merge from `branch_name` to `dev`, but
I haven’t done that yet.
",2021-03-14,enhancements-using-github-actions-to-deploy,"Integrating a version control system into your development cycle is just kind
of one of those things that you do, right? I use GitHub for my version
control, and its GitHub Actions to help with my deployment process.
There are 3 `yaml` files I have to get my local …
",Enhancements: Using GitHub Actions to Deploy,https://www.ryancheley.com/2021/03/14/enhancements-using-github-actions-to-deploy/
ryan,technology,"On my way back from Arizona a few weeks ago I decided to play around with
Drafts a bit. Now I use Drafts every day. When it went to a subscription model
more than a year ago it was a no brainer for me. This is a seriously powerful
app when you need it.
But since my initial workflows and shortcuts I've not really done too
[much](/creating-hastags-for-social-media-with-a-drafts-action.html) with it.
But after listening to some stuff from [Tim Nahumck](https://nahumck.me) I
decided I needed to invest a little time ... and honestly there's no better
time than cruising at 25k feet on your way back from Phoenix.
Ok, first of all I never really understood workspaces. I had some set up but I
didn't get it. That was the first place I started.
Each workspace can have its own action and keyboard shortcut thing which I
didn't realize. This has so much potential. I can create workspaces for all
sorts of things and have the keyboard shortcut things I need when I need them!
This alone is mind blowing and I'm disappointed I didn't look into this
feature sooner.
I have 4 workspaces set up:
* OF Templates
* O3
* Scrum
* post ideas
Initially since I didn't really understand the power of the workspace I had
them mostly as filtering tools to be used when trying to find a draft. But now
with the custom action and keyboards for each workspace I have them set up to
filter down to specific tags AND use their own keyboards.
The OF Template workspace is used to create OmniFocus projects based on
Taskpaper markup. There are a ton of different actions that I took from [Rose
Orchard](https://www.relay.fm/people/rose-orchard) (of
[Automators](https://automators.fm) fame) that help to either add items with
the correct syntax to a Task Paper markdown file OR turn the whole thing into
an OmniFocus project. Simply a life saver for when I really know all of the
steps that are going to be involved in a project and I want to write them all
down!
The O3 workspace is used for processing the notes from the one-on-one I have
with my team. There are really only two actions: Parse O3 notes and Add to O3
notes. How are these different? I have a Siri Shortcut that populates a Draft
with a template that collects the name of the person and the date time that
the O3 occurred. This is the note that is parsed by the first action. The
second action is used when someone does something that I want to remember
(either good or bad) so that I can bring it up at a more appropriate time (the
best time to tell someone about a behavior is right now, but sometimes
circumstances prevent that) so I have this little action.
In both cases they append data to a markdown file in Dropbox (i have one file
per person that reports to me). The Shortcut also takes any actions that need
to be completed and adds them to OmniFocus for me to review later.
The third workspace is Scrum. This workspace has just one action which is
""Parse scrum notes"". Again, I have a template that is generated from Siri
Shortcuts and dropped into Drafts. During the morning standup meetings I have
with my team this Draft will have the things I did yesterday, what I'm working
on today, and any roadblocks that I have. It also creates a section where I can
add actions which, when the draft is parsed, go into OmniFocus for me to
review later (currently the items get added with a due date of today at 1pm
... but I need to revisit that).
The last workspace is post ideas (which is where I'm writing this from). Its
custom keyboard is just a markdown one with quick ways to add markdown syntax
and a Preview button so I can see what the markdown will render out as.
It's still a work in progress as this draft will end up in Ulysses so it can
get posted to my site, [but I've seen that I can even post from Drafts to
Wordpress](https://www.macstories.net/reviews/drafts-5-4-siri-shortcuts-
wordpress-and-more/) so I'm going to give that a shot later on.
There are several other ideas I have bouncing around in my head about ideas
for potential workspaces. My only concern at this point is how many workspaces
can I have before there are too many to be used effectively.
So glad I had the time on the flight to take a look at workspaces. A huge
productivity boost for me!
",2019-05-05,figuring-out-how-drafts-really-works,"On my way back from Arizona a few weeks ago I decided to play around with
Drafts a bit. Now I use Drafts every day. When it went to a subscription model
more than a year ago it was a no brainer for me. This is a seriously powerful
app …
",Figuring out how Drafts REALLY works,https://www.ryancheley.com/2019/05/05/figuring-out-how-drafts-really-works/
ryan,technology,"I’ve written before about how easy it is to update your version of Python
using homebrew. And it totally is easy.
The thing that isn’t super clear is that when you do update Python via
Homebrew, it seems to break your virtual environments in PyCharm. 🤦♂️
I did a bit of searching to find this nice [post on the JetBrains
forum](https://intellij-support.jetbrains.com/hc/en-
us/community/posts/360000306410-Cannot-use-system-interpreter-in-PyCharm-
Pro-2018-1) which indicated
> unfortunately it's a known issue. Please close Pycharm and remove
> jdk.table.xml file from \~/Library/Preferences/.PyCharm2018.1/options
> directory, then start Pycharm again.
OK. I removed the file, but then you have to rebuild the virtual environments
because that file is what stores PyCharms knowledge of those virtual
environments.
In order to get you back to where you need to be, do the following (after
removing the `jdk.table.xml` file):
1. `pip freeze > requirements.txt`
2. Remove old virtual environment `rm -r venv`
3. Create a new Virtual Environment with PyCharm
1. Go to Preferences
2. Project > Project Interpreter
3. Show All
4. Click ‘+’ button
4. `pip install -r requirements.txt`
5. Restart PyCharm
6. You're back
This is a giant PITA but thankfully it didn’t take too much to find the issue,
nor to fix it. With that being said, I totally shouldn’t have to do this. But
I’m writing it down so that once Python 3.8 is available I’ll be able to
remember what I did to fix going from Python 3.7.1 to 3.7.5.
",2019-11-14,fixing-a-pycharm-issue-when-updating-python-made-via-homebrew,"I’ve written before about how easy it is to update your version of Python
using homebrew. And it totally is easy.
The thing that isn’t super clear is that when you do update Python via
Homebrew, it seems to break your virtual environments in PyCharm. 🤦♂️
I did a …
",Fixing a PyCharm issue when updating Python made via HomeBrew,https://www.ryancheley.com/2019/11/14/fixing-a-pycharm-issue-when-updating-python-made-via-homebrew/
ryan,technology,"In my last post I indicated that I may need to
> reinstalling everything on the Pi and starting from scratch
While speaking about my issues with `pip3` and `python3`. Turns out that the
fix was easier than I thought. I checked to see where `pip3` and `python3`
were being executed from by running the `which` command.
The `which pip3` returned `/usr/local/bin/pip3` while `which python3` returned
`/usr/local/bin/python3`. This is exactly what was causing my problem.
To verify what version of python was running, I checked `python3 --version`
and it returned `3.6.0`.
To fix it I just ran these commands to _unlink_ the new, broken versions:
`sudo unlink /usr/local/bin/pip3`
And
`sudo unlink /usr/local/bin/python3`
I found this answer on
[StackOverflow](https://stackoverflow.com/questions/7679674/changing-default-
python-to-another-version ""Of Course the answer was on Stack Overflow!"") and
tweaked it slightly for my needs.
Now, when I run `python --version` I get `3.4.2` instead of `3.6.0`
Unfortunately I didn’t think to run the `--version` flag on pip before and
after the change, and I’m hesitant to do it now as it’s back to working.
",2018-02-13,fixing-the-python-3-problem-on-my-raspberry-pi,"In my last post I indicated that I may need to
> reinstalling everything on the Pi and starting from scratch
While speaking about my issues with `pip3` and `python3`. Turns out that the
fix was easier than I thought. I checked to see where `pip3` and `python3`
were being …
",Fixing the Python 3 Problem on my Raspberry Pi,https://www.ryancheley.com/2018/02/13/fixing-the-python-3-problem-on-my-raspberry-pi/
ryan,technology,"I was listening to the most recent episode of
[ATP](http://atp.fm/episodes/302) and John Siracusa mentioned a programmer
test called [fizz buzz](http://wiki.c2.com/?FizzBuzzTest) that I hadn’t heard
of before.
I decided that I’d give it a shot when I got home using Python and Bash, just
to see if I could (I was sure I could, but you know, wanted to make sure).
Sure enough, with a bit of googling to remember some syntax of Python, and
learn some syntax for bash, I had two stupid little programs for fizz buzz.
## Python
def main():
    my_number = input(""Enter a number: "")
    if not my_number.isdigit():
        return
    else:
        my_number = int(my_number)
    if my_number%3 == 0 and my_number%15!=0:
        print(""fizz"")
    elif my_number%5 == 0 and my_number%15!=0:
        print(""buzz"")
    elif my_number%15 == 0:
        print(""fizz buzz"")
    else:
        print(my_number)

if __name__ == '__main__':
    main()
## Bash
#! /bin/bash
echo ""Enter a Number: ""
read my_number
re='^[+-]?[0-9]+$'
if ! [[ $my_number =~ $re ]] ; then
    echo ""error: Not a number"" >&2; exit 1
fi
if ! ((my_number % 3)) && ((my_number % 15)); then
    echo ""fizz""
elif ! ((my_number % 5)) && ((my_number % 15)); then
    echo ""buzz""
elif ! ((my_number % 15)) ; then
    echo ""fizz buzz""
else
    echo ""$my_number""
fi
And because if it isn’t in GitHub it didn’t happen, I committed it to my
[fizz-buzz repo](https://github.com/ryancheley/fizz-buzz).
I figure it might be kind of neat to write it in as many languages as I can,
you know … for when I’m bored.
",2018-11-28,fizz-buzz,"I was listening to the most recent episode of
[ATP](http://atp.fm/episodes/302) and John Siracusa mentioned a programmer
test called [fizz buzz](http://wiki.c2.com/?FizzBuzzTest) that I hadn’t heard
of before.
I decided that I’d give it a shot when I got home using Python and Bash, just
to see if I could …
",Fizz Buzz,https://www.ryancheley.com/2018/11/28/fizz-buzz/
ryan,technology,"Special Thanks to [Jeff Triplett](https://mastodon.social/@webology) who
provided an example that really got me started on better understanding of how
this all works.
In trying to wrap my head around MCPs over the long Memorial weekend I had a
breakthrough. I'm not really sure why this was so hard for me to
[grok](https://en.wikipedia.org/wiki/Grok), but now something seems to have
clicked.
I am working with [Pydantic AI](https://ai.pydantic.dev/) and so I'll be using
that as an example, but since MCPs are a standard protocol, these concepts
apply broadly across different implementations.
## What is Model Context Protocol (MCP)?
Per the [Anthropic announcement](https://www.anthropic.com/news/model-context-
protocol) (from November 2024!!!!)
> The Model Context Protocol is an open standard that enables developers to
> build secure, two-way connections between their data sources and AI-powered
> tools. The architecture is straightforward: developers can either expose
> their data through MCP servers or build AI applications (MCP clients) that
> connect to these servers.
What this means is that there is a standard way to extend models like Claude,
or OpenAI to include other information. That information can be files on the
file system, data in a database, etc.
## (Potential) Real World Example
I work for a Healthcare organization in Southern California. One of the
biggest challenges with onboarding new hires (and honestly can be a challenge
for people that have been with the organization for a long time) is who to
reach out to for support on which specific application.
Typically a user will send an email to one of the support teams, and the email
request can get bounced around for a while until it finally lands on the
'right' support desk. There's the potential to have the applications
themselves include who to contact, but some applications are vendor supplied
and there isn't always a way to do that.
Even if there were, in my experience those are often not noticed by users OR
the users will think that the support email is for non-technical issues, like
""Please update the phone number for this patient"" and not issues like, ""The
web page isn't returning any results for me, but it is for my coworker.""
## Enter an MCP with a Local LLM
Let's say you have a service that allows you to search through a file system
in a predefined set of directories. This service is run with the following
command
npx -y --no-cache @modelcontextprotocol/server-filesystem /path/to/your/files
In Pydantic AI, `MCPServerStdio` uses this same syntax, only it breaks it into
two parts:
* command
* args
The command is any application in your $PATH like `uvx` or `docker` or `npx`,
or you can explicitly define where the executable is by calling out its path,
like `/Users/ryancheley/.local/share/mise/installs/bun/latest/bin/bunx`
The args are the commands you'd pass to your application.
Taking the command from above and breaking it down we can set up our MCP using
the following
MCPServerStdio(
    ""npx"",
    args=[
        ""-y"",
        ""--no-cache"",
        ""@modelcontextprotocol/server-filesystem"",
        ""/path/to/your/files"",
    ],
)
## Application of MCP with the Example
Since I work in Healthcare, and I want to be mindful of the protection of
patient data, even if that data won't be exposed to this LLM, I'll use ollama
to construct my example.
I created a `support.csv` file that contains the following information
* Common Name of the Application
* URL of the Application
* Support Email
* Support Extension
* Department
I used the following prompt
> Review the file `support.csv` and help me determine who I contact about
> questions related to CarePath Analytics.
Here are the contents of the `support.csv` file
Name | URL | Support Email | Support Extension | Department
---|---|---|---|---
MedFlow Solutions | https://medflow.com | support@medflow.com | 1234 | Clinical Systems
HealthTech Portal | https://healthtech-portal.org | help@medflow.com | 3456 | Patient Services
CarePath Analytics | https://carepath.io | support@medflow.com | 4567 | Data Analytics
VitalSign Monitor | https://vitalsign.net | support@medflow.com | 1234 | Clinical Systems
Patient Connect Hub | https://patientconnect.com | contact@medflow.com | 3456 | Patient Services
EHR Bridge | https://ehrbridge.org | support@medflow.com | 2341 | Integration Services
Clinical Workflow Pro | https://clinicalwf.com | support@medflow.com | 1234 | Clinical Systems
HealthData Sync | https://healthdata-sync.net | sync@medflow.com | 6789 | Integration Services
TeleHealth Connect | https://telehealth-connect.com | help@medflow.com | 3456 | Patient Services
MedRecord Central | https://medrecord.central | records@medflow.com | 5678 | Medical Records
The script is below:
# /// script
# requires-python = "">=3.12""
# dependencies = [
#     ""pydantic-ai"",
# ]
# ///
import asyncio

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider


async def main():
    # Configure the Ollama model using OpenAI-compatible API
    model = OpenAIModel(
        model_name='qwen3:8b',  # or whatever model you have installed locally
        provider=OpenAIProvider(base_url='http://localhost:11434/v1')
    )

    # Set up the MCP server to access our support files
    support_files_server = MCPServerStdio(
        ""npx"",
        args=[
            ""-y"",
            ""@modelcontextprotocol/server-filesystem"",
            ""/path/to/your/files""  # Directory containing support.csv
        ]
    )

    # Create the agent with the model and MCP server
    agent = Agent(
        model=model,
        mcp_servers=[support_files_server],
    )

    # Run the agent with the MCP server
    async with agent.run_mcp_servers():
        # Get response from Ollama about support contact
        result = await agent.run(
            ""Review the file `support.csv` and help me determine who I contact about questions related to CarePath Analytics?""
        )
        print(result.output)


if __name__ == ""__main__"":
    asyncio.run(main())
As a user, if I ask who I should contact about questions related to CarePath
Analytics, the LLM will search through the `support.csv` file and supply the
email contact.
This example shows a command line script, and a Web Interface would probably
be better for most users. That would be the next thing I'd try to do here.
Once that was done you could extend it to also include an MCP to write an
email on the user's behalf. It could even ask probing questions to help make
sure that the email had more context for the support team.
Some support systems have their own ticketing / issue tracking systems and it
would be really valuable if this ticket could be written directly to that
system. With the MCP this is possible.
We'd need to update the `support.csv` file with some information about direct
writes via an API, and we'd need to secure the crap out of this, but it is
possible.
Now, the user can be more confident that their issue will go to the team that
it needs to and that their question / issue can be resolved much more quickly.
",2025-06-02,fun-with-mcps,"Special Thanks to [Jeff Triplett](https://mastodon.social/@webology) who
provided an example that really got me started on better understanding of how
this all works.
In trying to wrap my head around MCPs over the long Memorial weekend I had a
breakthrough. I'm not really sure why this was so hard for me …
",Fun with MCPs,https://www.ryancheley.com/2025/06/02/fun-with-mcps/
ryan,technology,"[Last October it was announced](https://www.fiercehealthcare.com/health-
tech/google-health-notches-another-provider-partner-care-studio) that Desert
Oasis Healthcare (the company I work for) signed on to pilot [Google's Care
Studio](https://health.google/caregivers/care-studio/). DOHC is the first
ambulatory clinic to sign on.
I had been in some of the discovery meetings before the announcement and was
really excited about the opportunity. So far our use of any Cloud platforms at
work has been extremely limited (that is to say, we don't use ANY of the big
three cloud solutions for our tech) so this seemed to provide a really good
opportunity.
As we worked through the project scoping there were conversations about the
handoff to DOHC and it occurred to me that I didn't have any knowledge of what
GCP offered, what any of it did, or how any of it could work.
I've had on my 'To Do' list to learn one of the Big Three Cloud services (AWS,
Azure, or GCP) but because we didn't use ANY of them at work I was (a) worried
about picking the 'wrong' one and (b) worried that even if I picked one I'd
NEVER be able to use it!
The partnership with Google changed that. Suddenly which cloud service to
learn was apparent AND I'd be able to use whatever I learned for work!
Great, now I know which cloud service to start to learn about ... the next
question is, ""What do I try to learn?"". In speaking with some of the folks at
Google they recommended one of three Certification options:
1. [Digital Cloud Leader](https://cloud.google.com/certification/cloud-digital-leader)
2. [Cloud Engineer](https://cloud.google.com/certification/cloud-engineer)
3. [Cloud Architect](https://cloud.google.com/certification/cloud-architect)
After reviewing each of them and having a good idea of what I **need** to know
for work, I opted for the Cloud Architect path.
Knowing which certification I was going to work towards, I started to see what
learning options were available for me. It just so happens that [Coursera
partnered with the California State Library to offer free
training](https://blog.coursera.org/coursera-partners-with-the-california-
state-library-to-launch-free-statewide-job-training-program/) which is great
because Coursera has [a learning path for the Cloud Architect
Exam](https://www.coursera.org/professional-certificates/gcp-cloud-architect)!
So I signed up for the first course of that path right before Thanksgiving and
started to work my way through the courses.
I spent most of the holidays working through these courses, going pretty fast
through them. The labs offered up are so helpful. They actually allow you to
work with GCP for FREE during your labs which is amazing.
After I made my way through the Coursera learning Path I bought the book
[Google Cloud Certified Professional Cloud Architect Study
Guide](https://www.amazon.com/dp/1119871050?psc=1&ref=ppx_yo2ov_dt_b_product_details)
which was really helpful. It came with 100 electronic flash cards and 2
practice exams, and each chapter had questions at the end.
I will say that the practice exams and chapter questions from the book weren't
really like the ACTUAL exam questions BUT it did help me in my learning,
especially regarding the case studies used in the exams.
I read through the book several times, and used the practice questions in the
chapters to drive what parts of the documentation I'd read to shore up my
understanding of the topics.
Finally, after about 3 months of pretty constant studying I took the test. I
opted for the remote proctoring option and I'd say that I really liked this
option. I was able to take the test in the same place I had done most of my
studying. I did have to remove essentially EVERYTHING from my home office, but
not having to drive anywhere, and not having to worry about unfamiliar
surroundings really helped me out (I think).
I had 2 hours in which to answer 60 questions. My general strategy for taking
tests is to go through the test, mark questions that I'm unsure of and
eliminate answers that I know to not be true on those questions. Once I've
gone through the test I revisit all of the unsure questions and work through
those.
My final pass is to go through ALL of the questions and make sure I didn't do
something silly.
Using this strategy I used 1 hour and 50 minutes of the 2 hours ... and I
passed!
The unfortunate part of the test is that you only get a Pass or Fail so you
don't have any opportunity to know what parts of the exam you missed. Now, if
you fail this could be a huge help in working to pass it next time, but even
if you pass it I think it would be helpful to know what areas you might
struggle in.
All in all this was a pretty great experience and it's already helping with
the GCP implementation at work. I'm able to ask better questions because I'm
at least aware of the various services and what they do.
",2023-04-01,gcp-cloud-architect-exam-experience,"[Last October it was announced](https://www.fiercehealthcare.com/health-
tech/google-health-notches-another-provider-partner-care-studio) that Desert
Oasis Healthcare (the company I work for) signed on to pilot [Google's Care
Studio](https://health.google/caregivers/care-studio/). DOHC is the first
ambulatory clinic to sign on.
I had been in some of the discovery meetings before the announcement and was
really excited about the opportunity. So …
",GCP Cloud Architect Exam Experience,https://www.ryancheley.com/2023/04/01/gcp-cloud-architect-exam-experience/
ryan,technology,"I use Hover for my domain purchases and management. Why? Because they have a
clean, easy to use, not-slimy interface, and because I listened to enough Tech
Podcasts that I’ve drunk the Kool-Aid.
When I was trying to get my Hover Domain to point to my Digital Ocean server
it seemed much harder to me than it needed to be. Specifically, I couldn’t
find any guide on doing it! Many of the tutorials I did find were basically
like, it’s all the same. We’ll show you with GoDaddy and then you can figure
it out.
Yes, I can figure it out, but it wasn’t as easy as it could have been. That’s
why I’m writing this up.
## Digital Ocean
From Droplet screen click ‘Add a Domain’

Add 2 ‘A’ records (one for www and one without the www)

Make note of the name servers

## Hover
In your account at Hover.com change your Name Servers to Point to Digital
Ocean ones from above.

## Wait
DNS … does anyone _really_ know how it works?[1] I just know that sometimes when
I make a change it’s out there almost immediately for me, and sometimes it
takes hours or days.
At this point, you’re just going to potentially need to wait. Why? Because DNS
that’s why. Ugh!
## Setting up directory structure
While we’re waiting for the DNS to propagate, now would be a good time to set
up some file structures for when we push our code to the server.
For my code deploy I’ll be using a user called `yoursite` (on my own server
that user is `burningfiddle`). We have to do two things here: create the user,
and add them to the `www-data` user group on our Linux server.
We can run these commands to take care of that:
adduser --disabled-password --gecos """" yoursite
This command will add the user with no password and prevent them from logging
in until a password has been set. Since this user will NEVER log into
the server, we’re done with the user creation piece!
Next, add the user to the proper group
adduser yoursite www-data
Now we have a user and they’ve been added to the group we need them in. In
creating the user, we also created a directory for them in the
`home` directory called `yoursite`. You should now be able to run this command
without error
ls /home/yoursite/
If that returns an error indicating no such directory, then you may not have
created the user properly.
Now we’re going to make a directory for our code to be run from.
mkdir /home/yoursite/yoursite
To run our Django app we’ll be using virtualenv. We can create our virtualenv
directory by running this command
python3 -m venv /home/yoursite/venv
## Configuring Gunicorn
There are two files needed for Gunicorn to run:
* gunicorn.socket
* gunicorn.service
For our setup, this is what they look like:
# gunicorn.socket
[Unit]
Description=gunicorn socket
[Socket]
ListenStream=/run/gunicorn.sock
[Install]
WantedBy=sockets.target
# gunicorn.service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=yoursite
EnvironmentFile=/etc/environment
Group=www-data
WorkingDirectory=/home/yoursite/yoursite
ExecStart=/home/yoursite/venv/bin/gunicorn \
          --access-logfile - \
          --workers 3 \
          --bind unix:/run/gunicorn.sock \
          yoursite.wsgi:application
[Install]
WantedBy=multi-user.target
For more on the details of the sections in both `gunicorn.service` and
`gunicorn.socket` see this
[article](https://www.digitalocean.com/community/tutorials/understanding-
systemd-units-and-unit-files ""Understanding systemd units and unit files"").
## Environment Variables
The only environment variables we have to worry about here (since we’re using
SQLite) are the DJANGO_SECRET_KEY and DJANGO_DEBUG
We’ll want to edit `/etc/environment` with our favorite editor (I’m partial to
`vim`, but use whatever you like):
vim /etc/environment
In this file you’ll add your DJANGO_SECRET_KEY and DJANGO_DEBUG. The file will
look something like this once you’re done:
PATH=""/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games""
DJANGO_SECRET_KEY=my_super_secret_key_goes_here
DJANGO_DEBUG=False
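For reference, here's a minimal sketch of how the Django settings might read those two values back out of the environment (the exact layout of your settings file will differ):

# settings.py (sketch): read the values written to /etc/environment.
# The EnvironmentFile= line in gunicorn.service is what exposes them
# to the gunicorn process that runs Django.
import os

SECRET_KEY = os.environ['DJANGO_SECRET_KEY']
# environment variables are always strings, so compare against the literal 'True'
DEBUG = os.environ.get('DJANGO_DEBUG', 'False') == 'True'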
## Setting up Nginx
Now we need to create our `.conf` file for Nginx. The file needs to be placed
in `/etc/nginx/sites-available/$sitename` where `$sitename` is the name of
your site.
The final file will look (something) like this:
server {
    listen 80;
    server_name www.yoursite.com yoursite.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/yoursite/yoursite/;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
The `.conf` file above tells Nginx to listen for requests to either
`www.yoursite.com` or `yoursite.com` and then route them to the
location `/home/yoursite/yoursite/` which is where our files are located for
our Django project.
With that in place, all that’s left to do is to enable it by running the
following (replacing `$sitename` with your file):
ln -s /etc/nginx/sites-available/$sitename /etc/nginx/sites-enabled
You’ll want to run
nginx -t
to make sure there aren’t any errors. If no errors occur you’ll need to
restart Nginx
systemctl restart nginx
The last thing to do is to allow full access to Nginx. You do this by running
ufw allow 'Nginx Full'
1. Probably just [Julia Evans](https://jvns.ca/blog/how-updating-dns-works/) ↩︎
",2021-02-07,getting-your-domain-to-point-to-digital-ocean-your-server,"I use Hover for my domain purchases and management. Why? Because they have a
clean, easy to use, not-slimy interface, and because I listed to enough Tech
Podcasts that I’ve drank the Kool-Aid.
When I was trying to get my Hover Domain to point to my Digital Ocean server …
",Getting your Domain to point to Digital Ocean Your Server,https://www.ryancheley.com/2021/02/07/getting-your-domain-to-point-to-digital-ocean-your-server/
ryan,technology,"As I've been writing up my posts for the last couple of days I've been using
the amazing [macOS](https://en.wikipedia.org/wiki/Macintosh_operating_systems)
[Text Editor](https://en.wikipedia.org/wiki/Text_editor)
[BBEdit](http://www.barebones.com/products/bbedit/index.html). One of the
things that has been tripping me up though are my 'Windows' tendencies on the
keyboard. Specifically, my muscle memory of the use and behavior of the
`Home`, `End`, `PgUp` and `PgDn` keys. The default behavior for these keys in
BBEdit is not what I needed (nor wanted). I lived with it for a couple of
days figuring I'd get used to it and that would be that.
While driving home from work today I was listening to [ATP Episode
196](https://atp.fm/episodes/196) and their Post-Show discussion of the recent
departure of [Sal Soghoian](https://en.wikipedia.org/wiki/Sal_Soghoian) who
was the Project Manager for macOS automation. I'm not sure why, but
suddenly it clicked with me that I could probably change the behavior of the
keys through the Preferences for the Keyboard (either system wide, or just in
the Application).
When I got home I fired up
[BBEdit](http://www.barebones.com/products/bbedit/index.html) and jumped into
the preferences and saw this:

I made a couple of changes, and now the keys that I use to navigate through
the text editor work the way I want them to:

Nothing too fancy, or anything, but goodness, does it feel right to have the
keys work the way I need them to.
",2016-11-22,home-end-pgup-pgdn-bbedit-preferences,"As I've been writing up my posts for the last couple of days I've been using
the amazing [macOS](https://en.wikipedia.org/wiki/Macintosh_operating_systems)
[Text Editor](https://en.wikipedia.org/wiki/Text_editor)
[BBEdit](http://www.barebones.com/products/bbedit/index.html). One of the
things that has been tripping me up though are my 'Windows' tendencies on the
keyboard. Specifically, my muscle memory of the use and behavior of …
","Home, End, PgUp, PgDn ... BBEdit Preferences",https://www.ryancheley.com/2016/11/22/home-end-pgup-pgdn-bbedit-preferences/
ryan,technology,"I created a Django site to troll my cousin Barry who is a big [San Diego
Padres](https://www.mlb.com/padres ""San Diego Padres"") fan. Their Shortstop is
a guy called [Fernando Tatis Jr.](https://www.baseball-
reference.com/players/t/tatisfe02.shtml ""Fernando “Error Maker” Tatis Jr."")
and he’s really good. Like **really** good. He’s also young, and arrogant, and
is everything an old dude like me doesn’t like about the ‘new generation’ of
ball players that are changing the way the game is played.
In all honesty though, it’s fun to watch him play (anyone but the Dodgers).
The thing about him though, is that while he’s really good at the plate, he’s
less good at playing defense. He currently leads the league in errors. Not
just for all shortstops, but for ALL players!
Anyway, back to the point. I made this Django site called [Does Tatis Jr Have an
Error Today?](https://www.doestatisjrhaveanerrortoday.com ""Not Yet""). It is a
simple site that only does one thing ... tells you if Tatis Jr has made an
error today. If he hasn’t, then it says `No`, and if he has, then it says
`Yes`.
It’s a dumb site that doesn’t do anything else. At all.
But, what it did do was lead me down a path to answer the question, “How does
my site connect to the internet anyway?”
Seems like a simple enough question to answer, and it is, but it wasn’t really
what I thought when I started.
## How it works
I use a MacBook Pro to work on the code. I then deploy it to a Digital Ocean
server using GitHub Actions. But they say, a picture is worth a thousand
words, so here's a chart of the workflow:

This shows the development cycle, but that doesn’t answer the question, how
does the site connect to the internet!
How is it that when I go to the site, I see anything? I thought I understood
it, and when I tried to actually draw it out, turns out I didn't!
After a bit of Googling, I found [this](https://serverfault.com/a/331263 ""How
does Gunicorn interact with NgInx?"") and it helped me to create this:

My site runs on an Ubuntu 18.04 server using Nginx as proxy server. Nginx
determines if the request is for a static asset (a css file for example) or
dynamic one (something served up by the Django App, like answering if Tatis
Jr. has an error today).
If the request is static, then Nginx just gets the static data and serves it.
If it’s dynamic, it hands off the request to Gunicorn, which then interacts
with the Django App.
So, what actually handles the HTTP request? From the [serverfault.com answer
above](https://serverfault.com/a/331263):
> [T]he simple answer is Gunicorn. The complete answer is both Nginx and
> Gunicorn handle the request. Basically, Nginx will receive the request and
> if it's a dynamic request (generally based on URL patterns) then it will
> give that request to Gunicorn, which will process it, and then return a
> response to Nginx which then forwards the response back to the original
> client.
In my head, I thought that Nginx was ONLY there to handle the static requests
(and it is) but I wasn’t clear on how dynamic requests were handled ... but
drawing this out really made me stop and ask, “Wait, how DOES that actually
work?”
Now I know, and hopefully you do too!
## Notes:
These diagrams are generated using the amazing library
[Diagrams](https://github.com/mingrammer/diagrams ""Diagrams""). The code used
to generate them is
[here](https://github.com/ryancheley/tatis/blob/main/generate_diagram.py).
",2021-05-31,how-does-my-django-site-connect-to-the-internet-anyway,"I created a Django site to troll my cousin Barry who is a big [San Diego
Padres](https://www.mlb.com/padres ""San Diego Padres"") fan. Their Shortstop is
a guy called [Fernando Tatis Jr.](https://www.baseball-
reference.com/players/t/tatisfe02.shtml ""Fernando “Error Maker” Tatis Jr."")
and he’s really good. Like **really** good. He’s also young, and arrogant, and
is everything an old dude like me doesn …
",How does my Django site connect to the internet anyway?,https://www.ryancheley.com/2021/05/31/how-does-my-django-site-connect-to-the-internet-anyway/
ryan,technology,"I [previously wrote](/using-mp4box-to-concatenate-many-h264-files-into-one-
mp4-file-revisited.html) about how I placed my Raspberry Pi above my
hummingbird feeder and added a camera to it to capture video.
Well, the day has finally come where I’ve been able to put my video of it up
on [YouTube](https://youtu.be/_oNlhrZJ-0Y)! It’s totally silly, but it was
satisfying getting it out there for everyone to watch and see.
## Hummingbird Video Capture: Addendum
The code used to generate the `mp4` file hasn’t changed (really). I did
do a couple of things to make it a little easier though.
I have 2 scripts that generate the file, copy it from the Pi to my
MacBook Pro, and then clean up:
Script 1 is called `create_script.sh` and looks like this:
(echo '#!/bin/sh'; echo -n ""MP4Box""; array=($(ls *.h264)); for index in ${!array[@]}; do if [ ""$index"" -eq 0 ]; then echo -n "" -add ${array[index]}""; else echo -n "" -cat ${array[index]}""; fi; done; echo -n "" hummingbird.mp4"") > create_mp4.sh | chmod +x create_mp4.sh
This creates a script called `create_mp4.sh` and makes it executable.
This script is called by another script called `run_script.sh` and looks like
this:
./create_script.sh
./create_mp4.sh
scp hummingbird.mp4 ryan@192.168.1.209:/Users/ryan/Desktop/
# Next we remove the video files locally
rm *.h264
rm *.mp4
It runs `create_script.sh`, which creates `create_mp4.sh`, and then runs it.
Then I use the `scp` command to copy the `mp4` file that was just created over
to my Mac Book Pro.
As a last bit of housekeeping I clean up the video files.
I’ve added this `run_script.sh` to a cron job that is scheduled to run every
night at midnight.
We’ll see how well it runs tomorrow night!
",2018-04-05,hummingbird-video-capture,"I [previously wrote](/using-mp4box-to-concatenate-many-h264-files-into-one-
mp4-file-revisited.html) about how I placed my Raspberry Pi above my
hummingbird feeder and added a camera to it to capture video.
Well, the day has finally come where I’ve been able to put my video of it up
on [YouTube](https://youtu.be/_oNlhrZJ-0Y)! It’s totally silly, but it was …
",Hummingbird Video Capture,https://www.ryancheley.com/2018/04/05/hummingbird-video-capture/
ryan,technology,"## Building my first Slack Bot
I had added a project to my OmniFocus database in November of 2021 which was,
""Build a Slackbot"" after watching a
[Video](https://www.youtube.com/watch?v=2X8SrKL7E9A) by [Mason
Egger](https://twitter.com/masonegger). I had hoped that I would be able to
spend some time on it over the holidays, but I was never able to really find
the time.
A few weeks ago, [Bob Belderbos](https://twitter.com/bbelderbos) tweeted:
> If you were to build a Slack bot, what would it do?
>
> — Bob Belderbos (@bbelderbos) [February 2,
> 2022](https://twitter.com/bbelderbos/status/1488806429251313666?ref_src=twsrc%5Etfw)
And I responded
> I work in US Healthcare where there are a lot of Acronyms (many of which are
> used in tech but have different meaning), so my slack bot would allow a user
> to enter an acronym and return what it means, i.e., CMS = Centers for
> Medicare and Medicaid Services.
>
> — The B Is Silent (@ryancheley) [February 2,
> 2022](https://twitter.com/ryancheley/status/1488879253911261184?ref_src=twsrc%5Etfw)
I didn't _really_ have any more time now than I did over the holiday, but Bob
asking and me answering pushed me to _actually_ write the darned thing.
I think one of the problems I encountered was what backend / tech stack to
use. I'm familiar with Django, but going from 0 to something in production has
a few steps and although I know how to do them ... I just felt ~overwhelmed~
by the prospect.
I felt equally ~overwhelmed~ by the prospect of trying FastAPI to create the
API or Flask, because I am not as familiar with their deployment story.
Another thing that was different now than before was that I had worked on a
[Django Cookie Cutter](https://github.com/ryancheley/django-cookiecutter) to
use and that was 'good enough' to try it out. So I did.
I ran into a few [problems](https://github.com/ryancheley/django-
cookiecutter/compare/de07ba6..cd7c272) while working with my Django Cookie
Cutter but I fixed them and then dove head first into writing the Slack Bot
## The model
The initial implementation of the model was very simple ... just 2 fields:
class Acronym(models.Model):
    acronym = models.CharField(max_length=8)
    definition = models.TextField()
    def save(self, *args, **kwargs):
        self.acronym = self.acronym.lower()
        super(Acronym, self).save(*args, **kwargs)
    class Meta:
        unique_together = (""acronym"", ""definition"")
        ordering = [""acronym""]
    def __str__(self) -> str:
        return self.acronym
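As a quick illustration of the overridden `save()`, here's a hypothetical Django shell snippet (not from the project) showing that whatever case you enter, the acronym gets stored lowercased:
acronym = Acronym.objects.create(acronym='CMS', definition='Centers for Medicare and Medicaid Services')
print(acronym)  # cms
print(Acronym.objects.get(acronym='cms').definition)  # Centers for Medicare and Medicaid Services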
Next I created the API using [Django Rest Framework](https://www.django-rest-
framework.org) using a single `serializer`
class AcronymSerializer(serializers.ModelSerializer):
    class Meta:
        model = Acronym
        fields = [
            ""id"",
            ""acronym"",
            ""definition"",
        ]
which is used by a single `view`
class AcronymViewSet(viewsets.ReadOnlyModelViewSet):
    serializer_class = AcronymSerializer
    queryset = Acronym.objects.all()
    def get_object(self):
        queryset = self.filter_queryset(self.get_queryset())
        print(self.kwargs[""acronym""])
        acronym = self.kwargs[""acronym""]
        obj = get_object_or_404(queryset, acronym__iexact=acronym)
        return obj
and exposed on 2 end points:
from django.urls import include, path
from .views import AcronymViewSet, AddAcronym, CountAcronyms, Events
app_name = ""api""
user_list = AcronymViewSet.as_view({""get"": ""list""})
user_detail = AcronymViewSet.as_view({""get"": ""retrieve""})
urlpatterns = [
    path("""", AcronymViewSet.as_view({""get"": ""list""}), name=""acronym-list""),
    path(""<str:acronym>/"", AcronymViewSet.as_view({""get"": ""retrieve""}), name=""acronym-detail""),
    path(""api-auth/"", include(""rest_framework.urls"", namespace=""rest_framework"")),
]
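To see what those endpoints return, here's a hypothetical request against the detail route (it's the same URL pattern the Slack view below calls; the values shown are just an example):
import requests
response = requests.get('https://slackbot.ryancheley.com/api/cms/')
print(response.json())  # {'id': 1, 'acronym': 'cms', 'definition': 'Centers for Medicare and Medicaid Services'}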
## Getting the data
At my joby-job we use Jira and Confluence. In one of our Confluence spaces we
have a Glossary page which includes nearly 200 acronyms. I had two choices:
1. Copy and Paste the acronym and definition for each item
2. Use Python to get the data
I used Python to get the data, via a Jupyter Notebook, but I didn't seem to
save the code anywhere (🤦🏻), so I can't include it here. But trust me, it was
💯.
## Setting up the Slack Bot
Although I had watched Mason's video, since I was building this with Django I
used [this article](https://medium.com/freehunch/how-to-build-a-slack-bot-
with-python-using-slack-events-api-django-under-20-minute-code-
included-269c3a9bf64e) as a guide in the development of the code below.
The code from my `views.py` is below:
ssl_context = ssl.create_default_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE
SLACK_VERIFICATION_TOKEN = getattr(settings, ""SLACK_VERIFICATION_TOKEN"", None)
SLACK_BOT_USER_TOKEN = getattr(settings, ""SLACK_BOT_USER_TOKEN"", None)
CONFLUENCE_LINK = getattr(settings, ""CONFLUENCE_LINK"", None)
client = slack.WebClient(SLACK_BOT_USER_TOKEN, ssl=ssl_context)
class Events(APIView):
    def post(self, request, *args, **kwargs):
        slack_message = request.data
        if slack_message.get(""token"") != SLACK_VERIFICATION_TOKEN:
            return Response(status=status.HTTP_403_FORBIDDEN)
        # verification challenge
        if slack_message.get(""type"") == ""url_verification"":
            return Response(data=slack_message, status=status.HTTP_200_OK)
        # greet bot
        if ""event"" in slack_message:
            event_message = slack_message.get(""event"")
            # ignore bot's own message
            if event_message.get(""subtype""):
                return Response(status=status.HTTP_200_OK)
            # process user's message
            user = event_message.get(""user"")
            text = event_message.get(""text"")
            channel = event_message.get(""channel"")
            url = f""https://slackbot.ryancheley.com/api/{text}/""
            response = requests.get(url).json()
            definition = response.get(""definition"")
            if definition:
                message = f""The acronym '{text.upper()}' means: {definition}""
            else:
                confluence = CONFLUENCE_LINK + f'/dosearchsite.action?cql=siteSearch+~+""{text}""'
                confluence_link = f""<{confluence}|Confluence>""
                message = f""I'm sorry <@{user}> I don't know what *{text.upper()}* is :shrug:. Try checking {confluence_link}.""
            if user != ""U031T0UHLH1"":
                client.chat_postMessage(
                    blocks=[{""type"": ""section"", ""text"": {""type"": ""mrkdwn"", ""text"": message}}], channel=channel
                )
            return Response(status=status.HTTP_200_OK)
        return Response(status=status.HTTP_200_OK)
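For reference, the parts of the Slack event payload that the view reads look roughly like this (a trimmed, hypothetical example based only on the keys accessed above):
slack_event_example = {
    'token': 'the-verification-token',
    'type': 'event_callback',
    'event': {
        'user': 'U0XXXXXXX',
        'text': 'cms',
        'channel': 'C0XXXXXXX',
    },
}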
Essentially what the Slack Bot does is take in the `request.data['text']` and
check it against the DRF API end point to see if there is a matching Acronym.
If there is, then it returns the acronym and its definition.
If there isn't, you get a message that it's not sure what you're looking for, but
that maybe Confluence1 can help, and it gives a link to our Confluence Search
page.
The last thing you'll notice is that if the User has a specific ID it won't
respond with a message. That's because in my initial testing I just had the
Slack Bot replying to the user saying 'Hi' with a 'Hi' back to the user.
I had a missing bit of logic though, so once you said hi to the Slack Bot, it
would reply back 'Hi' and then keep replying 'Hi' because it was talking to
itself. It was comical to see in real time 😂.
## Using ngrok to test it locally
[`ngrok`](https://ngrok.com) is a great tool for taking a local url, like
[localhost:8000/api/endpoint](localhost:8000/api/endpoint), and exposing it on
the internet with a publicly accessible url. This allows you to test
your local code and see any issues that might arise when pushed to production.
As I mentioned above the Slack Bot continually said ""Hi"" to itself in my
initial testing. Since I was running ngrok to serve up my local Server I was
able to stop the infinite loop by stopping my local web server. This would
have been a little more challenging if I had to push my code to an actual web
server first and **then** tested.
## Conclusion
This was such a fun project to work on, and I'm really glad that
[Bob](https://twitter.com/bbelderbos) tweeted asking what Slack Bot we would
build.
That gave me the final push to actually build it.
1. You'll notice that I'm using an environment variable to define the Confluence Link and may wonder why. It's mostly to keep the actual Confluence Link used at work non-public and not for any other reason 🤷🏻 ↩︎
",2022-02-19,i-made-a-slackbot,"## Building my first Slack Bot
I had added a project to my OmniFocus database in November of 2021 which was,
""Build a Slackbot"" after watching a
[Video](https://www.youtube.com/watch?v=2X8SrKL7E9A) by [Mason
Egger](https://twitter.com/masonegger). I had hoped that I would be able to
spend some time on it over the holidays, but I was …
",I made a Slackbot!,https://www.ryancheley.com/2022/02/19/i-made-a-slackbot/
ryan,technology,"Since I [switched my blog to
pelican](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-
wordpress/) last summer I've been using [VS
Code](https://code.visualstudio.com) as my writing app. And it's **really**
good for writing, not just code but prose as well.
The one problem I've had is there's no keyboard shortcut for links when
writing in markdown ... at least not a default / native keyboard shortcut.
In other (macOS) writing apps you just select the text and press ⌘+k and boop!
There's a markdown link set up for you. But not so much in VS Code.
I finally got to the point where that was one thing that may have been keeping
me from writing because of how much 'friction' it caused!
So, I decided to figure out how to fix that.
I did have to do a bit of googling and eventually found
[this](https://stackoverflow.com/a/70601782) StackOverflow answer
Essentially the answer is
1. Open the Preferences Page: ⌘+Shift+P
2. Select `Preferences: Open Keyboard Shortcuts (JSON)`
3. Update the `keybindings.json` file to include a new key
The new key looks like this:
{
""key"": ""cmd+k"",
""command"": ""editor.action.insertSnippet"",
""args"": {
""snippet"": ""[${TM_SELECTED_TEXT}]($0)""
},
""when"": ""editorHasSelection && editorLangId == markdown ""
}
Honestly, it's _little_ things like this that can make life so much easier and
more fun. Now I just need to remember to do this on my work computer 😀
",2022-04-08,inserting-a-url-in-markdown-in-vs-code,"Since I [switched my blog to
pelican](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-
wordpress/) last summer I've been using [VS
Code](https://code.visualstudio.com) as my writing app. And it's **really**
good for writing, note just code but prose as well.
The one problem I've had is there's no keyboard shortcut for links when
writing in markdown ... at least not …
",Inserting a URL in Markdown in VS Code,https://www.ryancheley.com/2022/04/08/inserting-a-url-in-markdown-in-vs-code/
ryan,technology,"One of the people I follow online, [Federico Viticci](http://ticci.org), is an
iOS power user, although I would argue that phrase doesn’t really do him
justice. He can make the iPad do things that many people can’t get Macs to do.
Recently he [posted](https://www.macstories.net/linked/in-search-of-the-
perfect-writing-font/) an article on a new font he is using in Ulysses and I
wanted to give it a try. The article says:
> Installing custom fonts in Ulysses for iOS is easy: [go to the GitHub
> page](https://github.com/iaolo/iA-Fonts/tree/master/iA%20Writer%20Duospace
> ""iA Writer Duospace""), download each one, and open them in Ulysses (with the
> share sheet) to install them.
Simple enough, but it wasn’t clicking for me. I kept thinking I had done
_something_ wrong. So I thought I’d write up the steps I used so I wouldn’t
forget the next time I need to add a new font.
## Downloading the Font
1. Download the font to somewhere you can get it. I chose to save it to iCloud and use the `Files` app
2. Hit Select in the `Files` app
3. Click `Share`
4. Select `Open in Ulysses`
5. The custom font is now installed and being used.
## Checking the Font:
1. Click the ‘A’ in the writing screen (this is the font selector) located in the upper right hand corner of Ulysses

1. Notice that the Current font indicates it’s a custom font (in this case iA Writer Duospace):

Not that hard, but there’s no feedback telling you that you have been
successful so I wasn’t sure if I had done it or not.
",2017-12-12,installing-fonts-in-ulysses,"One of the people I follow online, [Federico Viticci](http://ticci.org), is an
iOS power user, although I would argue that phrase doesn’t really do him
justice. He can make the iPad do things that many people can’t get Macs to do.
Recently he [posted](https://www.macstories.net/linked/in-search-of-the-
perfect-writing-font/) an article on a new …
",Installing fonts in Ulysses,https://www.ryancheley.com/2017/12/12/installing-fonts-in-ulysses/
ryan,technology,"I read about a cool gis package for Python and decided I wanted to play around
with it. This post isn't about any of the things I've learned about the
package, it's so I can remember how I installed it so I can do it again if I
need to. The package is described by its author in his
[post](http://geoffboeing.com/2016/11/osmnx-python-street-networks/)
To install `osmnx` I needed to do the following:
1. Install [Home Brew](https://brew.sh) if it's not already installed by running this command (as an administrator) in the `terminal`:
> > `/usr/bin/ruby -e ""$(curl -fsSL
> https://raw.githubusercontent.com/Homebrew/install/master/install)""`
2. Use [Home Brew to install the `spatialindex` dependency](https://github.com/kjordahl/SciPy-Tutorial-2015/issues/1). From the `terminal` (again as an administrator):
> > `brew install spatialindex`
3. In python run pip to install `rtree`:
> > `pip install rtree`
4. In python run pip to install `osmnx` (a quick check that the install worked is shown after this list):
> > `pip install osmnx`
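Once those steps finish, a quick way to confirm everything worked is to import the package in a Python session (assuming the package exposes a version string, which the releases I've used do):
import osmnx as ox  # if this import succeeds, the rtree/spatialindex pieces are wired up
print(ox.__version__)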
I did this on my 2014 iMac but didn't document the process. This lead to a
problem when I tried to run some code on my 2012 MacBook Pro.
Step 3 may not be required, but I'm **not** sure and I don't want to not have
it written down and then wonder why I can't get `osmnx` to install in 3 years
when I try again!
Remember, you're not going to remember what you did, so you need to write it
down!
",2016-11-24,installing-the-osmnx-package-for-python,"I read about a cool gis package for Python and decided I wanted to play around
with it. This post isn't about any of the things I've learned about the
package, it's so I can remember how I installed it so I can do it again if I
need to …
",Installing the osmnx package for Python,https://www.ryancheley.com/2016/11/24/installing-the-osmnx-package-for-python/
ryan,technology,"May people ask the question ... iPad Pro or MacBook Pro. I decided to really
think about this question and see, what is it that I do with each device.
Initially I thought of each device as being its own ‘thing’. I did these
things on my iPad Pro and those things on my MacBook Pro. But when I really
sat down and thought about it, it turns out that there are things I do
exclusively on my iPad Pro, and other things that I do exclusively on my
MacBook Pro ... but there are also many things that I do on both.
## iPad Pro
There are apps which only run on iOS. Drafts is a perfect example. It’s my
note taking app of choice. Using my iPhone in conjunction with my iPad makes
Drafts one of the most powerful apps I use in the iOS ecosystem.
During meetings I can quickly jot down things that I need to know using my
iPhone and no one notices or cares. Later, I can use my iPad Pro to process
these notes and make sure that everything gets taken care of.
I can also use Drafts as a powerful automation tool to get ideas into
OmniFocus (my To Do App of Choice) easily and without any fuss.
I also use my iPad Pro to process the expenses my family incurs. We use Siri
Shortcuts to take a picture of a receipt which is then saved in a folder in
Dropbox.
I monitor these images and match them up against expenses (or income) in Mint
and categorize the expenses.
This workflow helps to keep me (and my family) in the know about how (and more
importantly where) we’re spending our money.
Mint is available as a web page, and I’ve tried to use macOS and this
workflow, but it simply didn’t work for me.
Using OmniFocus on the iPad is a dream. I am easily able to process my inbox,
perform my weekly review and quickly add new items to the inbox. The ability to
drag and drop with either Apple Pencil or my finger makes it so easy to
move tasks around.
The other (obvious) use case for my iPad Pro over my MacBook Pro is media
consumption. Everyone says you can’t get real work done on an iPad and they
point to how easy it is to consume media on the iPad, but I think that shows
the opposite.
When you’re ready to take a break from doing real work, the best media
consumption device is the one you have with you 😀
## MacBook Pro
When I really thought about what I use my MacBook Pro for I was ... surprised.
Quite honestly, it’s used mostly to write code (in Python) using my favorite
editor (PyCharm) but other than that ... I don’t do much on it that I can’t do
on my iPad.
When I record a podcast (OK, really, just that one and just that one time) I use
my MBP, and if I have a ton of stuff I need to clean up in OmniFocus then I’m
over at the MacBook, but really, it doesn’t do anything I can’t do on the
iPad Pro.
Maybe I don’t do real work in the macOS ecosystem?
## What I do on both MacBook Pro and iPad Pro
Honestly, they both do a great job of getting me to where I want to go on the
internet. Some people think that mobile safari isn’t up to its macOS
counterpart (and they’re right) but for my (non-coding) needs, it doesn’t
really matter to me. They both work really well for me.
I also tend to use OmniFocus on both when I want to mark things as done, add
new items, or make bulk edits (OF3 on iOS finally made this one a
possibility).
I also use the terminal to access servers via ssh on both platforms. The great
thing about the command line is that it’s mostly the same where ever you’re
coming from.
Terminus on iOS is a great terminal app and I can just as easily navigate
the server there as I can using the terminal app in macOS.
I’m also just as likely to plan my family’s budget on iOS as I am macOS. It
just kind of depends which device is easier to get to, not what I’m planning
on doing. Excel on both platforms works really well for me (I work in a
Windows environment professionally so Excel is what I use and know for that
kind of thing).
Finally, writing. I use Ulysses on both macOS and iOS and really, I love them
both. Each app has parity with the other so I never feel like I’m losing
something when I write on my MacBook Pro (or on my iPad Pro). Sometimes, it’s
hard to really tell which platform I’m on because they do such a good job (for
me) to make them basically the same.
All in all, I don’t think it’s a question of which to choose, iPad Pro or
MacBook Pro, iOS or macOS ... it’s a matter of what device is closest to me
right now? What device will bring me the most joy to use, right now? What
device do I want to use right now?
iOS or macOS? iPad Pro or MacBook Pro? These aren’t the right questions to be
asking. It should be ... what device do I want to use right now? And don’t
care what anyone else thinks.
",2018-12-01,ipad-versus-macbook-pro,"May people ask the question ... iPad Pro or MacBook Pro. I decided to really
think about this question and see, what is it that I do with each device.
Initially I thought of each device as being its own ‘thing’. I did these
things on my iPad Pro and those …
",iPad versus MacBook Pro,https://www.ryancheley.com/2018/12/01/ipad-versus-macbook-pro/
ryan,technology,"In a [previous post](/mischief-managed/) I had written about an issue I’d had
with upgrading, installing, or just generally maintaining the python package
`psycopg2` ([link](https://www.psycopg.org)).
I ran into that issue again today, and thought to myself, “Hey, I’ve had this
problem before AND wrote something up about it. Let me go see what I did last
time.”
I searched my site for `psycopg2` and tried the solution, but I got the same
[forking](https://thegoodplace.fandom.com/wiki/Censored_Curse_Words) error.
OK … let’s turn to the experts on the internet.
After a while I came across
[this](https://stackoverflow.com/questions/26288042/error-installing-
psycopg2-library-not-found-for-lssl) article on StackOverflow but this
[specific answer](https://stackoverflow.com/a/56146592) helped get me up and
running.
A side effect of all of this is that I upgraded from Python 3.7.5 to Python
3.8.1. I also updated all of my brew packages, and basically did a lot of
cleaning up that I had neglected.
Not how I expected to spend my morning, but productive nonetheless.
",2020-05-03,issues-with-psycopg2-again,"In a [previous post](/mischief-managed/) I had written about an issue I’d had
with upgrading, installing, or just generally maintaining the python package
`psycopg2` ([link](https://www.psycopg.org)).
I ran into that issue again today, and thought to myself, “Hey, I’ve had this
problem before AND wrote something up about it. Let …
",Issues with psycopg2 … again,https://www.ryancheley.com/2020/05/03/issues-with-psycopg2-again/
ryan,technology,"My wife and I **love** baseball season. Specifically we love the
[Dodgers](https://www.mlb.com/dodgers ""Go Dodgers!!!"") and we can’t wait for
Spring Training to begin. In fact, today pitchers and catchers report!
I’ve wanted to do something with the Raspberry Pi Sense Hat that I got (since
I got it) but I’ve struggled to find anything useful. And then I remembered
baseball season and I thought, ‘Hey, what if I wrote something to have the
Sense Hat say “#ITFDB” starting 10 minutes before a Dodgers game started?’
And so I did!
The script itself is relatively straight forward. It reads a csv file and
checks to see if the current time in California is within 10 minutes of start
time of the game. If it is, then it will send a `show_message` command to the
Sense Hat.
I also wrote a cron job to run the script every minute so that I get a
beautiful scrolling bit of text every minute before the Dodgers start!
The code can be found on my [GitHub](https://github.com/ryancheley/itfdb ""Git
Hub"") page in the itfdb repository. There are 3 files:
1. `Program.py` which does the actual running of the script
2. `data_types.py` which defines a class used in `Program.py`
3. `schedule.csv` which is the schedule of the games for 2018 as a csv file.
I ran into a couple of issues along the way. First, my development environment
on my Mac Book Pro was Python 3.6.4 while the Production Environment on the
Raspberry Pi was 3.4. This made it so that the code about time ran locally but
not on the server 🤦♂️.
It took some playing with the code, but I was finally able to go from this
(which worked on 3.6 but not on 3.4):
now = utc_now.astimezone(pytz.timezone(""America/Los_Angeles""))
game_date_time = game_date_time.astimezone(pytz.timezone(""America/Los_Angeles""))
To this which worked on both:
local_tz = pytz.timezone('America/Los_Angeles')
now = utc_now.astimezone(local_tz)
game_date_time = local_tz.localize(game_date_time)
For both, the `game_date_time` variable setting was done in a for loop.
Another issue I ran into was being able to _display_ the message for the Sense
Hat on my MacBook Pro. I was never able to, because of a missing package
(RTIMU) that is apparently only available on Raspbian (the OS on the
Pi).
Finally, in my attempts to get the code I wrote locally to work on the Pi I
decided to install Python 3.6.0 on the Pi (while 3.4 was installed), which seemed
to do nothing but break `pip`. It looks like I’ll be learning how to uninstall
Python 3.4 OR reinstalling everything on the Pi and starting from scratch. Oh
well … at least it’s just a Pi and not a _real_ server.
Although, I’m pretty sure I hosed my [Linode](https://www.linode.com) server a
while back and basically did the same thing so maybe it’s just what I do with
servers when I’m learning.
One final thing. While sitting in the living room watching _DC Legends of
Tomorrow_ the Sense Hat started to display the message. Turns out, I was
accounting for the minute, hour, and day but _NOT_ the month. The Dodgers play
the Cubs on September 12 at 9:35 (according to the schedule.csv file anyway)
and so the conditions to display were met.
I added another condition to make sure it was the right month and now we’re
good to go!
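Roughly, the check ended up looking like this (a sketch with assumed variable names; the real version is in the itfdb repo):
from dateutil.relativedelta import relativedelta
diff = relativedelta(now, game_date_time)
if diff.months == 0 and diff.days == 0 and diff.hours == 0 and 0 >= diff.minutes >= -10:
    sense.show_message('#ITFDB!!!', scroll_speed=0.05)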
Super pumped for this season with the Dodgers!
",2018-02-13,itfdb,"My wife and I **love** baseball season. Specifically we love the
[Dodgers](https://www.mlb.com/dodgers ""Go Dodgers!!!"") and we can’t wait for
Spring Training to begin. In fact, today pitchers and catchers report!
I’ve wanted to do something with the Raspberry Pi Sense Hat that I got (since
I got it) but I …
",ITFDB!!!,https://www.ryancheley.com/2018/02/13/itfdb/
ryan,technology,"Last Wednesday if you would have asked what I had planned for Easter I would
have said something like, “Going to hide some eggs for my daughter even though
she knows the Easter bunny isn’t real.”
Then suddenly my wife and I were planning on entertaining for 11 family
members. My how things change!
Since I was going to have family over, some of whom are
[Giants](https://www.mlb.com/giants) fans, I wanted to show them the [ITFDB
program I have set up with my
Pi](http://www.ryancheley.com/index.php/2018/02/13/itfdb/).
The only problem is that they would be over at 10am and leave by 2pm while the
game doesn’t start until 5:37pm (Thanks [ESPN](https://www.espn.com)).
To help demonstrate the script I wrote a _demo_ script to display a message on
the Pi and play the Vin Scully mp3.
The Code was simple enough:
from sense_hat import SenseHat
import os
def main():
    sense = SenseHat()
    message = '#ITFDB!!! The Dodgers will be playing San Francisco at 5:37pm tonight!'
    sense.show_message(message, scroll_speed=0.05)
    os.system(""omxplayer -b /home/pi/Documents/python_projects/itfdb/dodger_baseball.mp3"")
if __name__ == '__main__':
    main()
But then the question becomes, how can I easily launch the script without
[futzing](https://en.wiktionary.org/wiki/futz) with my laptop?
I knew that I could run a shell script for the [Workflow
app](https://workflow.is) on my iPhone with a single action, so I wrote a
simple shell script
python3 ~/Documents/python_projects/itfdb/demo.py
Which was called `itfdb_demo.sh`
And made it executable
chmod u+x itfdb_demo.sh
Finally, I created a WorkFlow which has only one action `Run Script over SSH`
and added it to my home screen so that with a simple tap I could demo the
results.
The WorkFlow looks like this:

Nothing too fancy, but I was able to reliably and easily demonstrate what I
had done. And it was pretty freaking cool!
",2018-04-01,itfdb-demo,"Last Wednesday if you would have asked what I had planned for Easter I would
have said something like, “Going to hide some eggs for my daughter even though
she knows the Easter bunny isn’t real.”
Then suddenly my wife and I were planning on entertaining for 11 family …
",ITFDB Demo,https://www.ryancheley.com/2018/04/01/itfdb-demo/
ryan,technology,"It’s time for Kings Hockey! A couple of years ago Emily and I I decided to be
Hockey fans. This hasn’t really meant anything except that we picked a team
(the Kings) and ‘rooted’ for them (i.e. talked sh*t* to our hockey friends),
looked up their position in the standings, and basically said, “Umm ... yeah,
we’re hockey fans.”
When the 2018 baseball season ended, and with the lack of interest in the NFL
(or the NBA) Emily and I decided to actually focus on the NHL. Step 1 in
becoming a Kings fan is watching the games. To that end we got a subscription
to NHL Center Ice and have committed to watching the games.
Step 2 is getting notified of when the games are on. To accomplish this I
added the games to our family calendar, and decided to use what I learned
writing my [ITFDB](/itfdb/) program and write one for the Kings.
For the Dodgers I had to create a CSV file and read its contents.
Fortunately, the NHL has a sweet API that I could use. This also gave me an
opportunity to use an API for the first time!
The API is relatively straight forward and has some really good documentation
so using it wasn’t too challenging.
import requests
from sense_hat import SenseHat
from datetime import datetime
import pytz
from dateutil.relativedelta import relativedelta
def main(team_id):
    sense = SenseHat()
    local_tz = pytz.timezone('America/Los_Angeles')
    utc_now = pytz.utc.localize(datetime.utcnow())
    now = utc_now.astimezone(local_tz)
    url = 'https://statsapi.web.nhl.com/api/v1/schedule?teamId={}'.format(team_id)
    r = requests.get(url)
    total_games = r.json().get('totalGames')
    for i in range(total_games):
        game_time = (r.json().get('dates')[i].get('games')[0].get('gameDate'))
        away_team = (r.json().get('dates')[i].get('games')[0].get('teams').get('away').get('team').get('name'))
        home_team = (r.json().get('dates')[i].get('games')[0].get('teams').get('home').get('team').get('name'))
        away_team_id = (r.json().get('dates')[i].get('games')[0].get('teams').get('away').get('team').get('id'))
        home_team_id = (r.json().get('dates')[i].get('games')[0].get('teams').get('home').get('team').get('id'))
        game_time = datetime.strptime(game_time, '%Y-%m-%dT%H:%M:%SZ').replace(tzinfo=pytz.utc).astimezone(local_tz)
        minute_diff = relativedelta(now, game_time).minutes
        hour_diff = relativedelta(now, game_time).hours
        day_diff = relativedelta(now, game_time).days
        month_diff = relativedelta(now, game_time).months
        game_time_hour = str(game_time.hour)
        game_time_minute = '0'+str(game_time.minute)
        game_time = game_time_hour+"":""+game_time_minute[-2:]
        away_record = return_record(away_team_id)
        home_record = return_record(home_team_id)
        if month_diff == 0 and day_diff == 0 and hour_diff == 0 and 0 >= minute_diff >= -10:
            if home_team_id == team_id:
                msg = 'The {} ({}) will be playing the {} ({}) at {}'.format(home_team, home_record, away_team, away_record, game_time)
            else:
                msg = 'The {} ({}) will be playing at the {} ({}) at {}'.format(home_team, home_record, away_team, away_record, game_time)
            sense.show_message(msg, scroll_speed=0.05)
def return_record(team_id):
    standings_url = 'https://statsapi.web.nhl.com/api/v1/teams/{}/stats'.format(team_id)
    r = requests.get(standings_url)
    wins = (r.json().get('stats')[0].get('splits')[0].get('stat').get('wins'))
    losses = (r.json().get('stats')[0].get('splits')[0].get('stat').get('losses'))
    otl = (r.json().get('stats')[0].get('splits')[0].get('stat').get('ot'))
    record = str(wins)+'-'+str(losses)+'-'+str(otl)
    return record
if __name__ == '__main__':
    main(26) # This is the code for the LA Kings; the ID can be found here: https://statsapi.web.nhl.com/api/v1/teams/
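If you want to use a different team, the same API exposes a teams endpoint (the URL in the comment above); here's a rough sketch of listing the IDs, assuming the response shape I found when poking at the API:
import requests
teams = requests.get('https://statsapi.web.nhl.com/api/v1/teams/').json().get('teams', [])
for team in teams:
    print(team.get('id'), team.get('name'))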
The part that was the most interesting for me was getting the opponent name
and then the record for both the opponent and the Kings. Since this is live
data it allows the records to be updated which I couldn’t do (easily) with the
Dodgers programs (hey MLB ... anytime you want to have a free API I’m ready!).
Anyway, it was super fun and on November 6 I had the opportunity to actually
see it work.
I really like doing fun little projects like this.
",2018-11-09,itfkh,"It’s time for Kings Hockey! A couple of years ago Emily and I I decided to be
Hockey fans. This hasn’t really meant anything except that we picked a team
(the Kings) and ‘rooted’ for them (i.e. talked sh*t* to our hockey friends),
looked up their …
",ITFKH!!!,https://www.ryancheley.com/2018/11/09/itfkh/
ryan,technology,"Sometimes the internet is a horrible, awful, ugly thing. And then other times,
it’s exactly what you need.
I have 2 Raspberry Pi each with different versions of Python. One running
python 3.4.2 and the other running Python 3.5.3. I have previously tried to
upgrade the version of the Pi running 3.5.3 to a more recent version (in this
case 3.6.1) and read 10s of articles on how to do it. It did not go well.
Parts seemed to have worked, while others didn’t. I have 3.6.1 installed, but
in order to run it I have to issue the command `python3.6` which is _fine_ but
not really what I was looking for.
For whatever reason, although I do nearly all of my Python development on my
Mac, it hadn’t occurred to me to upgrade Python there until last night.
With a simple Google search the first result came to Stackoverflow (what
else?) and [this](https://apple.stackexchange.com/questions/201612/keeping-
python-3-up-to-date-on-a-mac) answer.
brew update
brew upgrade python3
Sometimes things on a Mac do ‘just work’. This was one of those times.
I’m now running Python 3.7.1 and all I needed to do was a simple command in
the terminal.
God bless the internet.
",2018-12-22,keeping-python-up-to-date-on-macos,"Sometimes the internet is a horrible, awful, ugly thing. And then other times,
it’s exactly what you need.
I have 2 Raspberry Pi each with different versions of Python. One running
python 3.4.2 and the other running Python 3.5.3. I have previously tried to
upgrade …
",Keeping Python up to date on macOS,https://www.ryancheley.com/2018/12/22/keeping-python-up-to-date-on-macos/
ryan,technology,"Per the [Django
Documentation](https://docs.djangoproject.com/en/3.1/ref/settings/#std:setting-
ADMINS) you can set up
> A list of all the people who get code error notifications. When DEBUG=False
> and AdminEmailHandler is configured in LOGGING (done by default), Django
> emails these people the details of exceptions raised in the request/response
> cycle.
In order to set this up you need to include in your `settings.py` file
something like:
ADMINS = [
('John', 'john@example.com'),
('Mary', 'mary@example.com')
]
The difficulties I always ran into were:
1. How to set up the AdminEmailHandler
2. How to set up a way to actually email from the Django Server
Again, per the [Django
Documentation](https://docs.djangoproject.com/en/3.1/topics/logging/#django.utils.log.AdminEmailHandler
""AdminEmailHandler""):
> Django provides one log handler in addition to those provided by the Python
> logging module
Reading through the documentation didn’t **really** help me all that much. The
docs show the following example:
'handlers': {
    'mail_admins': {
        'level': 'ERROR',
        'class': 'django.utils.log.AdminEmailHandler',
        'include_html': True,
    }
},
That’s great, but there’s not a direct link (that I could find) to the example
of how to configure the logging in that section. It is instead at the **VERY**
bottom of the documentation page in the Contents section in the [Configured
logging >
Examples](https://docs.djangoproject.com/en/3.1/topics/logging/#configuring-
logging) section ... and you _really_ need to know that you have to look for
it!
The important thing to do is to include the above in the appropriate `LOGGING`
setting, like this:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'include_html': True,
        }
    },
}
## Sending an email with Logging information
We’ve got the logging and it will be sent via email, but there’s no way for
the email to get sent out yet!
In order to accomplish this I use [SendGrid](https://sendgrid.com ""SendGrid"").
No real reason other than that’s what I’ve used in the past.
There are [great tutorials online](https://sendgrid.com/docs/for-
developers/sending-email/django/ ""Django and SendGrid Tutorials"") for how to
get SendGrid integrated with Django, so I won’t rehash that here. I’ll just
drop my the settings I used in my `settings.py`
SENDGRID_API_KEY = env(""SENDGRID_API_KEY"")
EMAIL_HOST = ""smtp.sendgrid.net""
EMAIL_HOST_USER = ""apikey""
EMAIL_HOST_PASSWORD = SENDGRID_API_KEY
EMAIL_PORT = 587
EMAIL_USE_TLS = True
One final thing I needed to do was to update the email address that was being
used to send the email. By default it uses `root@localhost` which isn’t ideal.
You can override this by setting
SERVER_EMAIL = ""myemail@mydomain.tld""
With those three settings, everything should just work.
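One way to sanity-check the whole chain without waiting for a real exception is to send a test message from a Django shell; `mail_admins` uses the `ADMINS` and `SERVER_EMAIL` settings (the subject and body below are just placeholders):
from django.core.mail import mail_admins
mail_admins('Test subject', 'If this arrives, ADMINS, SendGrid, and SERVER_EMAIL are all wired up.')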
",2020-10-21,logging-in-a-django-app,"Per the [Django
Documentation](https://docs.djangoproject.com/en/3.1/ref/settings/#std:setting-
ADMINS) you can set up
> A list of all the people who get code error notifications. When DEBUG=False
> and AdminEmailHandler is configured in LOGGING (done by default), Django
> emails these people the details of exceptions raised in the request/response
> cycle.
In order to set this …
",Logging in a Django App,https://www.ryancheley.com/2020/10/21/logging-in-a-django-app/
ryan,technology,"# Logging
Last year I worked on an update to the package
[tryceratops](https://pypi.org/project/tryceratops/) with [Gui
Latrova](https://twitter.com/guilatrova) to include a verbose flag for
logging.
Honestly, Gui was a huge help and I wrote about my experience
[here](https://www.ryancheley.com/2021/08/07/contributing-to-tryceratops/)
but I didn't really understand why what I did worked.
Recently I decided that I wanted to better understand logging so I dove into
some posts from Gui, and sat down and read the documentation on the logging
from the standard library.
My goal with this was to (1) be able to use logging in my projects, and (2)
write something that may be able to help others.
Full disclosure, Gui has a **really** [good article explaining
logging](https://guicommits.com/how-to-log-in-python-like-a-pro/) and I think
everyone should read it. My notes below are a synthesis of his article, my
understanding of the [documentation from the standard
library](https://docs.python.org/3/library/logging.html), and the [Python
HowTo](https://docs.python.org/3/howto/logging.html) written in a way to
answer the [Five W questions](https://www.education.com/game/five-ws-song/) I
was taught in grade school.
## The Five W's
**Who are the generated logs for?**
Anyone trying to troubleshoot an issue, or monitor the history of actions that
have been logged in an application.
**What is written to the log?**
The [formatter](https://docs.python.org/3/library/logging.html#formatter-
objects) determines what to display or store.
**When is data written to the log?**
The [logging level](https://docs.python.org/3/library/logging.html#logging-
levels) determines when to log the issue.
**Where is the log data sent to?**
The [handler](https://docs.python.org/3/library/logging.html#handler-objects)
determines where to send the log data whether that's a file, or stdout.
**Why would I want to use logging?**
To keep a history of actions taken during your code.
**How is the data sent to the log?**
The [loggers](https://docs.python.org/3/library/logging.html#logger-objects)
determine how to bundle all of it together through calls to various methods.
## Examples
Let's say I want a logger called `my_app_errors` that captures all ERROR level
incidents and higher to a file, and that tells me the date time, level, message, and
logger name, and gives a traceback of the error. I could do the following:
import logging
message='oh no! an error occurred'
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s - %(name)s')
logger = logging.getLogger('my_app_errors')
fh = logging.FileHandler('errors.log')
fh.setFormatter(formatter)
logger.addHandler(fh)
logger.error(message, stack_info=True)
The code above would generate something like this to a file called
`errors.log`
2022-03-28 19:45:49,188 - ERROR - oh no! an error occurred - my_app_errors
Stack (most recent call last):
File ""/Users/ryan/Documents/github/logging/test.py"", line 9, in
logger.error(message, stack_info=True)
If I want a logger that will do all of the above AND output debug information
to the console I could:
import logging
message='oh no! an error occurred'
logger = logging.getLogger('my_app_errors')
logger.setLevel(logging.DEBUG)  # without this, the logger's effective level (WARNING) would drop the debug call
ch = logging.StreamHandler()
fh = logging.FileHandler('errors.log')
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s - %(name)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
logger.addHandler(fh)
logger.addHandler(ch)
logger.error(message, stack_info=True)
logger.debug(message, stack_info=True)
Again, the code above would generate something like this to a file called
`errors.log`
2022-03-28 19:45:09,406 - ERROR - oh no! an error occurred - my_app_errors
Stack (most recent call last):
File ""/Users/ryan/Documents/github/logging/test.py"", line 18, in
logger.error(message, stack_info=True)
but it would also output to stderr in the terminal something like this:
2022-03-27 13:18:45,367 - ERROR - oh no! an error occurred - my_app_errors
Stack (most recent call last):
File """", line 1, in
The above is a bit hard to scale though. What happens when we want to have
multiple formatters, for different levels, that get output to different places?
We can incorporate all of that into something like what we see above, OR we
can start to leverage logging configuration files.
Why would we want to have multiple formatters? Perhaps the DevOps team wants
robust logging messages on anything `ERROR` and above, but the application
team wants to have `INFO` and above in a rotating file name schema, while the
QA team needs to have the `DEBUG` and up output to standard out.
You CAN do all of this inline with the code above, but would you **really**
want to? Probably not.
Enter configuration files to allow easier management of log files (and a
potential way to make everyone happy) which I'll cover in the next post.
",2022-03-30,logging-part-1,"# Logging
Last year I worked on an update to the package
[tryceratops](https://pypi.org/project/tryceratops/) with [Gui
Latrova](https://twitter.com/guilatrova) to include a verbose flag for
logging.
Honestly, Gui was a huge help and I wrote about my experience
[here](\[link\]\(https://www.ryancheley.com/2021/08/07/contributing-to-
tryceratops/\)) but I didn't really understand why what I did worked.
Recently I decided that I …
",Logging Part 1,https://www.ryancheley.com/2022/03/30/logging-part-1/
ryan,technology,"In my [previous post](https://www.ryancheley.com/2022/03/30/logging-part-1/) I
wrote about inline logging, that is, using logging in the code without a
configuration file of some kind.
In this post I'm going to go over setting up a configuration file to support
the various different needs you may have for logging.
Previously I mentioned this scenario:
> Perhaps the DevOps team wants robust logging messages on anything `ERROR`
> and above, but the application team wants to have `INFO` and above in a
> rotating file name schema, while the QA team needs to have the `DEBUG` and
> up output to standard out.
Before we get into how we may implement something like what's above, let's
review the parts of the Logger which are:
* [formatters](https://docs.python.org/3/library/logging.html#formatter-objects)
* [handlers](https://docs.python.org/3/library/logging.html#handler-objects)
* [loggers](https://docs.python.org/3/library/logging.html#logger-objects)
## Formatters
In a logging configuration file you can have multiple formatters specified.
The above example doesn't state WHAT each team needs, so let's define it here:
* DevOps: They need to know **when** the error occurred, what the **level** was, and what **module** the error came from
* Application Team: They need to know **when** the error occurred, the **level** , what **module** and **line**
* The QA Team: They need to know when the error occurred, the **level** , what **module** and **line** , and they need a **stack trace**
For the Devops Team we can define a formatter as such1:
'%(asctime)s - %(levelname)s - %(module)s'
The Application team would have a formatter like this:
'%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'
while the QA team would have one like this:
'%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'
## Handlers
The Handler controls _where_ the data from the log is going to be sent. There
are several kinds of handlers, but based on our requirements above, we'll only
be looking at three of them (see the
[documentation](https://docs.python.org/3/howto/logging.html#useful-handlers)
for more types of handlers)
From the example above we know that the DevOps team wants to save the output
to a file, while the Application Team wants to have the log data saved in a
way that allows the log files to not get **too** big. Finally, we know that
the QA team wants the output to go directly to `stdout`
We can handle all of these requirements via the handlers. In this case, we'd
use
* [FileHandler](https://docs.python.org/3/library/logging.handlers.html#filehandler) for the DevOps team
* [RotatingFileHandler](https://docs.python.org/3/library/logging.handlers.html#rotatingfilehandler) for the Application team
* [StreamHandler](https://docs.python.org/3/library/logging.handlers.html#streamhandler) for the QA team
## Configuration File
Above we defined the formatter and handler. Now we start to put them together.
The basic format of a logging configuration has 3 parts (as described above).
The example I use below is `YAML`, but a dictionary or a `conf` file would
also work.
Below we see five keys in our `YAML` file:
version: 1
formatters:
handlers:
loggers:
root:
  level:
  handlers:
The `version` key is to allow for future versions in case any are introduced.
As of this writing, there is only 1 version ... and it's `version: 1`
### Formatters
We defined the formatters above so let's add them here and give them names
that map to the teams
version: 1
formatters:
  devops:
    format: '%(asctime)s - %(levelname)s - %(module)s'
  application:
    format: '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'
  qa:
    format: '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'
Right off the bat we can see that the formatters for `application` and `qa`
are the same, so we can either keep them separate to help allow for easier
updates in the future (and to be more explicit) OR we can merge them into a
single formatter to adhere to `DRY` principles.
I'm choosing to go with option 1 and keep them separate.
### Handlers
Next, we add our handlers. Again, we give them names to map to the team. There
are several keys for the handlers that are specific to the type of handler
that is used. For each handler we set a level (which will map to the level
from the specs above).
Additionally, each handler has keys associated based on the type of handler
selected. For example, `logging.FileHandler` needs to have the filename
specified, while `logging.StreamHandler` needs to specify where to output to.
When using `logging.handlers.RotatingFileHandler` we have to specify a few
more items in addition to a filename so the logger knows how and when to
rotate the log writing.
version: 1
formatters:
  devops:
    format: '%(asctime)s - %(levelname)s - %(module)s'
  application:
    format: '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'
  qa:
    format: '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'
handlers:
  devops:
    class: logging.FileHandler
    level: ERROR
    filename: 'devops.log'
  application:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    filename: 'application.log'
    mode: 'a'
    maxBytes: 10000
    backupCount: 3
  qa:
    class: logging.StreamHandler
    level: DEBUG
    stream: ext://sys.stdout
What the setup above does for the `devops` handler is to output the log data
to a file called `devops.log`, while the `application` handler outputs to a
rotating set of files called `application.log`. For the `application.log` it
will hold a maximum of 10,000 bytes. Once the file is 'full' it will create a
new file called `application.log.1`, copy the contents of `application.log`
and then clear out the contents of `application.log` to start over. It will do
this 3 times, giving the application team the following files:
* application.log
* application.log.1
* application.log.2
Finally, the handler for QA will output directly to stdout.
### Loggers
Now we can take all of the work we did above to create the `formatters` and
`handlers` and use them in the `loggers`!
Below we see how the loggers are set up in configuration file. It seems a
_bit_ redundant because I've named my formatters, handlers, and loggers all
matching terms, but 🤷♂️
The only new thing we see in the configuration below is the new `propagate:
no` for each of the loggers. If there were parent loggers (we don't have any)
then this would prevent the logging information from being sent 'up' the chain
to parent loggers.
The [documentation](https://docs.python.org/3/howto/logging.html#logging-flow)
has a good diagram showing the workflow for how the `propagate` works.
Below we can see what the final, fully formed logging configuration looks
like.
version: 1
formatters:
  devops:
    format: '%(asctime)s - %(levelname)s - %(module)s'
  application:
    format: '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'
  qa:
    format: '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'
handlers:
  devops:
    class: logging.FileHandler
    level: ERROR
    filename: 'devops.log'
  application:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    filename: 'application.log'
    mode: 'a'
    maxBytes: 10000
    backupCount: 3
  qa:
    class: logging.StreamHandler
    level: DEBUG
    stream: ext://sys.stdout
loggers:
  devops:
    level: ERROR
    formatter: devops
    handlers: [devops]
    propagate: no
  application:
    level: INFO
    formatter: application
    handlers: [application]
    propagate: no
  qa:
    level: DEBUG
    formatter: qa
    handlers: [qa]
    propagate: no
root:
  level: ERROR
  handlers: [devops, application, qa]
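To give a quick sense of how a file like this gets wired in, here is a minimal sketch that assumes the YAML above has been saved as `logging_config.yml`; the filename and the use of the PyYAML package are assumptions, not part of the configuration itself:
import logging
import logging.config

import yaml  # PyYAML, assumed to be installed

# read the YAML file into a plain dictionary (the filename is an assumption)
with open('logging_config.yml') as f:
    config = yaml.safe_load(f)

# hand the dictionary to the standard library's logging machinery
logging.config.dictConfig(config)

# each team can then ask for its logger by name
qa_logger = logging.getLogger('qa')
qa_logger.debug('QA messages go to stdout per the handler above')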
In my next post I'll write about how to use the above configuration file to
allow the various teams to get the log output they need.
1. full documentation on what is available for the formatters can be found here: https://docs.python.org/3/library/logging.html#logrecord-attributes ↩︎
",2022-04-07,logging-part-2,"In my [previous post](https://www.ryancheley.com/2022/03/30/logging-part-1/) I
wrote about inline logging, that is, using logging in the code without a
configuration file of some kind.
In this post I'm going to go over setting up a configuration file to support
the various different needs you may have for logging.
Previously I mentioned …
",Logging Part 2,https://www.ryancheley.com/2022/04/07/logging-part-2/
ryan,technology,"I'm a big fan of [podcasts](http://www.ryancheley.com/podcasts-i-like/). I've
been listening to them for 4 or 5 years now. One of my favorite Podcast
Networks, [Relay](http://www.relay.fm) just had their second anniversary. They
offer memberships and after listening to hours and hours of _All The Great
Shows_ I decided that I needed to become a
[member](https://www.relay.fm/membership).
One of the awesome perks of [Relay](http://www.relay.fm) membership is a set
of **Amazing** background images.
This is fortuitous as I've been looking for some good backgrounds for my iMac,
and so it seemed like a perfect fit.
On my iMac I have several `spaces` configured. One for `Writing`, one for
`Podcast` and one for everything else. I wanted to take the backgrounds from
Relay and have them on the `Writing` space and the `Podcasting` space, but I
also wanted to be able to distinguish between them. One thing I could try to
do would be to open up an image editor (Like
[Photoshop](http://www.photoshop.com),
[Pixelmater](http://www.pixelmator.com/pro/) or
[Acorn](https://flyingmeat.com/acorn/)) and add text to them one at a time
(although I'm sure there is a way to script them) but I decided to see if I
could do it using Python.
Turns out, I can.
This code will take the background images from my `/Users/Ryan/Relay 5K
Backgrounds/` directory and spit them out into a subdirectory called
`Podcasting`
from PIL import Image, ImageStat, ImageFont, ImageDraw
from os import listdir
from os.path import isfile, join
# Declare Text Attributes
TextFontSize = 400
TextFontColor = (128,128,128)
font = ImageFont.truetype(""~/Library/Fonts/Inconsolata.otf"", TextFontSize)
mypath = '/Users/Ryan/Relay 5K Backgrounds/'
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
onlyfiles.remove('.DS_Store')
rows = len(onlyfiles)
for i in range(rows):
    img = Image.open(mypath+onlyfiles[i])
    width, height = img.size
    draw = ImageDraw.Draw(img)
    TextXPos = 0.6 * width
    TextYPos = 0.85 * height
    draw.text((TextXPos, TextYPos),'Podcasting',TextFontColor,font=font)
    img.save('/Users/Ryan/Relay 5K Backgrounds/Podcasting/'+onlyfiles[i])
    print('/Users/Ryan/Relay 5K Backgrounds/Podcasting/'+onlyfiles[i]+' successfully saved!')
This was great, but it included all of the images, and some of them are
_really_ bright. I mean, like _really_ bright.
So I decided to use [something I learned while helping my daughter with her
Science Project last year](http://www.ryancheley.com/blog/2016/12/17/its-
science) and determine the brightness of the images and use only the dark
ones.
This led me to update the code to this:
from PIL import Image, ImageStat, ImageFont, ImageDraw
from os import listdir
from os.path import isfile, join
def brightness01( im_file ):
    im = Image.open(im_file).convert('L')
    stat = ImageStat.Stat(im)
    return stat.mean[0]
# Declare Text Attributes
TextFontSize = 400
TextFontColor = (128,128,128)
font = ImageFont.truetype(""~/Library/Fonts/Inconsolata.otf"", TextFontSize)
mypath = '/Users/Ryan/Relay 5K Backgrounds/'
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
onlyfiles.remove('.DS_Store')
darkimages = []
rows = len(onlyfiles)
for i in range(rows):
    if brightness01(mypath+onlyfiles[i]) <= 65:
        darkimages.append(onlyfiles[i])
darkimagesrows = len(darkimages)
for i in range(darkimagesrows):
    img = Image.open(mypath+darkimages[i])
    width, height = img.size
    draw = ImageDraw.Draw(img)
    TextXPos = 0.6 * width
    TextYPos = 0.85 * height
    draw.text((TextXPos, TextYPos),'Podcasting',TextFontColor,font=font)
    img.save('/Users/Ryan/Relay 5K Backgrounds/Podcasting/'+darkimages[i])
    print('/Users/Ryan/Relay 5K Backgrounds/Podcasting/'+darkimages[i]+' successfully saved!')
I also wanted to have backgrounds generated for my **Writing** space, so I
tacked on this code:
for i in range(darkimagesrows):
    img = Image.open(mypath+darkimages[i])
    width, height = img.size
    draw = ImageDraw.Draw(img)
    TextXPos = 0.72 * width
    TextYPos = 0.85 * height
    draw.text((TextXPos, TextYPos),'Writing',TextFontColor,font=font)
    img.save('/Users/Ryan/Relay 5K Backgrounds/Writing/'+darkimages[i])
    print('/Users/Ryan/Relay 5K Backgrounds/Writing/'+darkimages[i]+' successfully saved!')
The `print` statements at the end of the `for` loops were so that I could tell
that something was actually happening. The images were VERY large (close to
10MB for each one) so the `PIL` library was taking some time to process the
data and I was concerned that something had frozen / stopped working.
This was a pretty straightforward project, but it was pretty fun. It allowed
me to go from this:

To this:

For the text attributes I had to play around with them for a while until I
found the color, font and font size that I liked and looked good (to me).
The Positioning of the text also took a bit of experimentation, but a little
trial and error and I was all set.
Also, for the `brightness` level of 65 I just looked at the images that seemed
to work and found a threshold to use. The actual value may vary depending on
the look you're going for.
",2017-09-17,making-background-images,"I'm a big fan of [podcasts](http://www.ryancheley.com/podcasts-i-like/). I've
been listening to them for 4 or 5 years now. One of my favorite Podcast
Networks, [Relay](http://www.relay.fm) just had their second anniversary. They
offer memberships and after listening to hours and hours of _All The Great
Shows_ I decided that I needed to …
",Making Background Images,https://www.ryancheley.com/2017/09/17/making-background-images/
ryan,technology,"Logging into a remote server is a drag. Needing to remember the password (or
get it from [1Password](https://1password.com)); needing to remember the IP
address of the remote server. Ugh.
It’d be so much easier if I could just
ssh username@servername
and get into the server.
And it turns out, you can. You just need to do two simple things.
## Simple thing the first: Update the `hosts` file on your local computer to map the IP address to a memorable name.
The `hosts` file is located at `/etc/hosts` (at least on *nix based systems).
Go to the hosts file in your favorite editor … my current favorite editor for
simple stuff like this is vim.
Once there, add the IP address you don’t want to have to remember, and then a
name that you will remember. For example:
67.176.220.115 easytoremembername
One thing to keep in mind, you’ll already have some entries in this file.
Don’t mess with them. Leave them there. Seriously … it’ll be better for
everyone if you do.
## Simple thing the second: Generate a public-private key and share the public key with the remote server
From the terminal run the command `ssh-keygen -t rsa`. This will generate a
public and private key. You will be asked for a location to save the keys to.
The default (on MacOS) is `/Users/username/.ssh/id_rsa`. I tend to accept the
default (no reason not to) and leave the passphrase blank (this means you
won’t have to enter a password which is what we’re looking for in the first
place!)
Next, we copy the public key to the host(s) you want to access using the
command
ssh-copy-id username@hostname
for example:
ssh-copy-id pi@rpicamera
The first time you do this you will get a message asking you if you’re sure
you want to do this. Type in `yes` and you’re good to go.
One thing to note, doing this updates the file `known_hosts`. If, for some
reason, the server you are ssh-ing to needs to be rebuilt (i.e. you have to
keep destroying your Digital Ocean Ubuntu server because you can’t get the
static files to be served properly for your Django project) then you need to
go to the `known_hosts` file and remove the entry for that known host.
When you do that you’ll be asked about the identity of the server (again).
Just say yes and you’re good to go.
If you forget that step then when you try to ssh into the server you get a
nasty looking error message saying that the server identities don’t match and
you can’t proceed.
",2018-05-05,making-it-easy-to-ssh-into-a-remote-server,"Logging into a remote server is a drag. Needing to remember the password (or
get it from [1Password](https://1password.com)); needing to remember the IP
address of the remote server. Ugh.
It’d be so much easier if I could just
ssh username@servername
and get into the server.
And it turns …
",Making it easy to ssh into a remote server,https://www.ryancheley.com/2018/05/05/making-it-easy-to-ssh-into-a-remote-server/
ryan,technology,"I recently got a new raspberry pi (yes, I might have a problem) and wanted to
be able to ssh into it without having to remember the IP or password. Luckily
I wrote [this helpful post](/making-it-easy-to-ssh-into-a-remote-server.html)
several months ago.
While it got me most of the way there, I did run into a slight issue.
## First Issue
The issue was that I had a typo for the command to generate a key. I had:
`ssh-keyken -t rsa`
Which should have been:
`ssh-keygen -t rsa`
When I copied and pasted the original command the terminal said there was no
such command. 🤦♂️
## Second Issue
Once that got cleared up I went through the steps and was able to get
everything set up. Or so I thought. On attempting to ssh into my new pi I was
greeted with a password prompt. WTF?
The first thing I did was to check to see what keys were in my \~/.ssh folder.
Sure enough there were a couple of them in there.
ls ~/.ssh
id_rsa id_rsa.github id_rsa.github.pub id_rsa.pub known_hosts read_only_key read_only_key.pub
Next, I interrogated the help command for `ssh-copy-id` to see what flags were
available.
Usage: /usr/bin/ssh-copy-id [-h|-?|-f|-n] [-i [identity_file]] [-p port] [[-o ] ...] [user@]hostname
-f: force mode -- copy keys without trying to check if they are already installed
-n: dry run -- no keys are actually copied
-h|-?: print this help
I figured let’s try the `-n` flag and get the output from that. Doing so gave
me
ryan@Ryans-MBP:~/Desktop$ ssh-copy-id -n pi@newpi
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ""/Users/ryan/.ssh/id_rsa.github.pub""
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
(if you think this is a mistake, you may want to use -f option)
OK … why is it sending the GitHub key? That’s a different problem for a
different time. I see another flag available is the `-i` which will allow me
to specify which key I want to send. Aha!
OK, now all that I need to do is use the following command to test the output:
ssh-copy-id -n -i ~/.ssh/id_rsa.pub pi@newpi
And sure enough it’s sending the correct key
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ""/Users/ryan/.ssh/id_rsa.pub""
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
(if you think this is a mistake, you may want to use -f option)
Remove the `-n` flag to send it for real
ssh-copy-id -i ~/.ssh/id_rsa.pub pi@newpi
And try to ssh in again
ssh pi@newpi
Success!
I wanted to write this up for 2 reasons:
1. So I can refer back to it if I ever need to. This blog is mostly for me to write down technical things that I do so I can remember them later on
2. This is the first time I’ve run into an issue with a command like tool and simply used the help to figure out how to fix the problem and I wanted to memorialize that. It felt [forking](https://thegoodplace.fandom.com/wiki/Censored_Curse_Words) awesome to do that.
Footnote: Yes … calling my new raspberry pi `newpi` in my hosts file is dumb.
Yes, when I get my next new Raspberry Pi I will be wondering what to call it.
Yes, I am going to try and remember to make the change before it happens so
that I don’t end up with the next Pi being called `newnewpi` and the one after
that being `newnewnewpi`.
",2019-03-25,making-it-easy-to-ssh-into-a-remote-server-addendum,"I recently got a new raspberry pi (yes, I might have a problem) and wanted to
be able to ssh into it without having to remember the IP or password. Luckily
I wrote [this helpful post](/making-it-easy-to-ssh-into-a-remote-server.html)
several months ago.
While it go me most of the way there, I did …
",Making it easy to ssh into a remote server: Addendum,https://www.ryancheley.com/2019/03/25/making-it-easy-to-ssh-into-a-remote-server-addendum/
ryan,technology,"On Tuesday October 29 I worked with [Oliver
Andrich](https://github.com/oliverandrich/), [Daniel
Moran](https://github.com/cunla/) and [Storm Heg](https://github.com/Stormheg)
to migrate Oliver's project [django-tailwind-cli](https://github.com/django-
commons/django-tailwind-cli) from Oliver's GitHub project to Django Commons.
This was the 5th library that has been migrated over, but the first one that I
'lead'. I was a bit nervous. The Django Commons docs are great and super
helpful, but the first time you do something, it can be nerve wracking.
One thing that was super helpful was knowing that Daniel and Storm were there
to help me out when any issues came up.
The first set up steps are pretty straight forward and we were able to get
through them pretty quickly. Then we ran into an issue that none of us had
seen previously.
`django-tailwind-cli` had initially set up GitHub Pages for the docs,
but migrated to use [Read the Docs](https://about.readthedocs.com/). However,
the GitHub pages were still set in the repo so when we tried to migrate them
over we ran into an error. Apparently you can't remove GitHub pages using
Terraform (the process that we use to manage the organization).
We spent a few minutes trying to parse the error, make some changes, and try
again (and again) and we were able to finally successfully get the migration
completed 🎉
Some other things that came up during the migration were a maintainer that was
set in the front end, but not in the Terraform file. Also, while I was making
changes to the Terraform file locally I ran into an issue with an update that
had been done in the GitHub UI on my branch which caused a conflict for me
locally.
I've had to deal with this kind of thing before, but ... never with an
audience! Trying to work through the issue was a bit stressful to say the
least 😅
But, with the help of Daniel and Storm I was able to resolve the conflicts and
get the code pushed up.
As of this writing we have [6 libraries](https://github.com/orgs/django-
commons/repositories?type=source&q=language%3APython+-topic%3Atemplate) that
are part of the Django Commons organization and am really excited for the next
time that I get to lead a migration. Who knows, at some point I might actually
be able to do one on my own ... although our hope is that this can be
automated much more ... so maybe that's what I can work on next
Working on a project like this has been really great. There are such great
opportunities to learn various technologies (terraform, GitHub Actions, git)
and getting to work with great collaborators.
What I'm hoping to be able to work on this coming weekend is1:
1. Get a better understanding of Terraform and how to use it with GitHub
2. Use Terraform to do something with GitHub Actions
3. Try and create a merge conflict and then use the git cli, or Git Tower, or VS Code to resolve the merge conflict
For number 3 in particular I want to have more comfort for fixing those kinds
of issues so that if / when they come up again I can resolve them.
1. Now will I actually be able to 🤷🏻 ↩︎
",2024-11-20,migrating-django-tailwind-cli-to-django-commons,"On Tuesday October 29 I worked with [Oliver
Andrich](https://github.com/oliverandrich/), [Daniel
Moran](https://github.com/cunla/) and [Storm Heg](https://github.com/Stormheg)
to migrate Oliver's project [django-tailwind-cli](https://github.com/django-
commons/django-tailwind-cli) from Oliver's GitHub project to Django Commons.
This was the 5th library that has been migrated over, but the first one that I
'lead'. I was a bit nervous. The Django …
",Migrating django-tailwind-cli to Django Commons,https://www.ryancheley.com/2024/11/20/migrating-django-tailwind-cli-to-django-commons/
ryan,technology,"## A little back story
In October of 2017 I [wrote about how I migrated from SquareSpace to
Wordpress](https://www.ryancheley.com/2017/10/01/migrating-from-square-space-
to-word-press/). After almost 4 years I’ve decided to migrate again, this time
to [Pelican](https://blog.getpelican.com). I did a bit of work with Pelican
during my [100 Days of Web Code](https://www.ryancheley.com/2019/08/31/my-
first-project-after-completing-the-100-days-of-web-in-python/) back in 2019.
A good question to ask is, “why migrate to a new platform?” The answer is that
while writing my post [Debugging Setting up a Django
Project](https://www.ryancheley.com/2021/06/13/debugging-setting-up-a-django-
project/) I had to go back and make a change. It was the first time I’d ever
had to use the WordPress Admin to write anything ... and it was awful.
My writing and posting workflow involves [Ulysses](https://ulysses.app) where
I write everything in MarkDown. Having to use the WYSIWIG interface and the
‘blocks’ in WordPress just broke my brain. That meant what should have been a
slight tweak ended up taking me like 45 minutes.
I decided to give Pelican a shot in a local environment to see how it worked.
And it turned out to work very well for my brain and my writing style.
## Setting it up
I set up a local instance of Pelican using the [Quick
Start](https://docs.getpelican.com/en/latest/quickstart.html ""Quick Start"")
guide in the docs.
Pelican has a CLI utility that converts the xml into Markdown files. This
allowed me to export my Wordpress blog content to its XML output and save it
in the Pelican directory I created.
I then ran the command:
pelican-import --wp-attach -o ./content ./wordpress.xml
This created about 140 .md files
Next, I ran a few `Pelican` commands to generate the output:
pelican content
and then the local web server:
pelican --listen
I reviewed the page and realized there was a bit of clean up that needed to be
done. I had categories of Blog posts that only had 1 article, and were really
just a different category that needed to be tagged appropriately. So, I made
some updates to the categorization and tagging of the posts.
I also had some broken links I wanted to clean up so I took the opportunity to
check the links on all of the pages and make fixes where needed. I used the
library [LinkChecker](https://pypi.org/project/LinkChecker/) which made the
process super easy. It is a CLI that generates HTML that you can then review.
Pretty neat.
## Deploying to a test server
The first thing to do was to update my DNS for a new subdomain to point to my
UAT server. I use Hover and so it was pretty easy to add the new entry.
I set uat.ryancheley.com to the IP Address 178.128.188.134
Next, in order to have UAT serve requests for my new site I need to have a
configuration file for Nginx. This
[post](https://michael.lustfield.net/nginx/blog-with-pelican-and-nginx) gave
me what I needed as a starting point for the config file. Specifically it gave
me the location blocks I needed:
location = / {
    # Instead of handling the index, just
    # rewrite / to /index.html
    rewrite ^ /index.html;
}
location / {
    # Serve a .gz version if it exists
    gzip_static on;
    # Try to serve the clean url version first
    try_files $uri.htm $uri.html $uri =404;
}
With that in hand I deployed my pelican site to the server
The first thing I noticed was that the URLs still had `index.php` in them.
This is a hold over from how my WordPress URL schemes were set up initially
that I never got around to fixing but it’s always something that’s bothered
me.
My blog may not be something that is linked to a ton (or at all?), but I
didn’t want to break any links if I didn’t have to, so I decided to
investigate Nginx rewrite rules.
I spent a bit of time trying to get my url to from this:
https://www.ryancheley.com/index.php/2017/10/01/migrating-from-square-space-to-word-press/
to this:
https://www.ryancheley.com/migrating-from-square-space-to-word-press/
using rewrite rules.
I gave up after several hours of trying different things. This did lead me to
some awesome settings for Pelican that would allow me to retain the legacy
Wordpress linking structure, so I updated the settings file to include this
line:
ARTICLE_URL = 'index.php/{date:%Y}/{date:%m}/{date:%d}/{slug}/'
ARTICLE_SAVE_AS = 'index.php/{date:%Y}/{date:%m}/{date:%d}/{slug}/index.html'
OK. I still have the `index.php` issue, but at least my links won’t break.
### 404 Not Found
I starting testing the links on the site just kind of clicking here and there
and discovered a couple of things:
1. The menu links didn’t always work
2. The 404 page wasn’t styled like I wanted it to be styled
The pelican documentation has an example for creating your own [404
pages](https://docs.getpelican.com/en/latest/tips.html?highlight=404#custom-404-pages)
which also includes what to update the Nginx config file location block.
And this is what lead me to discover what I had been doing wrong for the
rewrites earlier!
There are two location blocks in the example code I took, but I didn’t see how
they were different.
The first location block is:
location = / {
    # Instead of handling the index, just
    # rewrite / to /index.html
    rewrite ^ /index.html;
}
Per the Nginx documentation the `=`
> > If an equal sign is used, this block will be considered a match if the
> request URI exactly matches the location given.
BUT since I was trying to use a regular expression, it wasn’t matching exactly
and so it wasn’t ‘working’
The second location block was not an exact match (notice there is no `=` in
the first line):
location / {
    # Serve a .gz version if it exists
    gzip_static on;
    # Try to serve the clean url version first
    try_files $uri.htm $uri.html $uri =404;
}
When I added the error page setting for Pelican I also added the URL rewrite
rules to remove the `index.php`, and suddenly the redirect rules I had dreamed
of worked!
Additionally, I didn’t need the first location block at all. The final
location block looks like this:
location / {
    # Serve a .gz version if it exists
    gzip_static on;
    # Try to serve the clean url version first
    # try_files $uri.htm $uri.html $uri =404;
    error_page 404 /404.html;
    rewrite ^/index.php/(.*) /$1 permanent;
}
I was also able to update my Pelican settings to this:
ARTICLE_URL = '{date:%Y}/{date:%m}/{date:%d}/{slug}/'
ARTICLE_SAVE_AS = '{date:%Y}/{date:%m}/{date:%d}/{slug}/index.html'
Victory!
## What I hope to gain from moving
In my post outlining the move from SquareSpace to Wordpress I said,
> > As I wrote earlier my main reason for leaving Square Space was the
> difficulty I had getting content in. So, now that I’m on a WordPress site,
> what am I hoping to gain from it?
>>
>> 1. Easier to post my writing
>> 2. See Item 1
>>
>>
>> Writing is already really hard for me. I struggle with it and making it
difficult to get my stuff out into the world makes it that much harder. My
hope is that not only will I write more, but that my writing will get better
because I’m writing more.
So, what am I hoping to gain from this move:
1. Just as easy to write my posts
2. Easier to edit my posts
Writing is still hard for me (nearly 4 years later) and while moving to a new
shiny tool won’t make the thinking about writing any easier, maybe it will
make the process of writing a little more fun and that may lead to more words!
## Addendum
There are already a lot of words here and I have more to say on this. I plan
on writing a couple of more posts about the migration:
1. Setting up the server to host Pelican
2. The writing workflow used
",2021-07-02,migrating-to-pelican-from-wordpress,"## A little back story
In October of 2017 I [wrote about how I migrated from SquareSpace to
Wordpress](https://www.ryancheley.com/2017/10/01/migrating-from-square-space-
to-word-press/). After almost 4 years I’ve decided to migrate again, this time
to [Pelican](https://blog.getpelican.com). I did a bit of work with Pelican
during my [100 Days of Web Code](https://www.ryancheley.com/2019/08/31/my-
first-project-after-completing-the-100-days-of-web-in-python/) back in 2019 …
",Migrating to Pelican from Wordpress,https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-wordpress/
ryan,technology,"A few weeks back I decided to try and update my Python version with Homebrew.
I had already been through an issue where an update like this was going to
cause an issue, but I also knew what the fix [was](/fixing-a-pycharm-issue-
when-updating-python-made-via-homebrew/ ""Homebrew and PyCharm don’t mix"").
With this knowledge in hand I happily performed the update. To my surprise, 2
things happened:
1. The update seemed to have me go from Python 3.7.6 to 3.7.3
2. When trying to reestablish my `Virtual Environment` two packages wouldn’t install: `psycopg2` and `django-heroku`
Now, the update/backdate isn’t the end of the world. Quite honestly, next
weekend I’m going to just ditch homebrew and go with the standard download
from [Python.org](https://www.python.org ""Python"") because I’m hoping that
this nonsense won’t be an issue anymore.
The second issue was a bit more irritating though. I spent several hours
trying to figure out what the problem was, only to find out, there wasn’t one
really.
The ‘fix’ to the issue was to
1. Open PyCharm
2. Go to Settings
3. Go to ‘Project Interpreter’
4. Click the ‘+’ to add a package
5. Look for the package that wouldn’t install
6. Click ‘Install Package’
7. Voilà ... [mischief managed](https://www.hp-lexicon.org/magic/mischief-managed/)
The next time this happens I’m just buying a new computer
",2020-02-10,mischief-managed,"A few weeks back I decided to try and update my Python version with Homebrew.
I had already been through an issue where the an update like this was going to
cause an issue, but I also knew what the fix [was](/fixing-a-pycharm-issue-
when-updating-python-made-via-homebrew/ ""Homebrew and PyCharm don’t mix"").
With this knowledge in hand I happily performed …
",Mischief Managed,https://www.ryancheley.com/2020/02/10/mischief-managed/
ryan,technology,"In late April of this year I wrote a script that would capture the temperature
of the Raspberry Pi that sits above my Hummingbird feeder and log it to a
file.
It’s a straight forward enough script that captures the date, time and
temperature as given by the internal `measure_temp` function. In code it looks
like this:
MyDate=""`date +'%m/%d/%Y, %H:%M, '`""
MyTemp=""`/opt/vc/bin/vcgencmd measure_temp |tr -d ""=temp'C""`""
echo ""$MyDate$MyTemp"" >> /home/pi/Documents/python_projects/temperature/temp.log
I haven’t ever really done anything with the file, but one thing I wanted to
do was to get alerted if (when) the temperature exceeded the recommended level
of 70 C.
To do this I installed `ssmtp` onto my Pi using `apt-get`
sudo apt-get install ssmtp
With that installed I am able to send an email using the following command:
echo ""This is the email body"" | mail -s ""This is the subject"" user@domain.tld
With this tool in place I was able to attempt to send an alert if (when) the
Pi’s temperature got above 70 C (the maximum recommended running temp).
At first, I tried adding this code:
if [ ""$MyTemp"" -gt ""70"" ]; then
echo ""Camera Pi Running Hot"" | mail -s ""Warning! The Camera Pi is Running Hot!!!"" user@domain.tld
fi
Where the `$MyTemp` came from the above code that gets logged to the temp.log
file.
It didn’t work. The problem is that the temperature I’m capturing for logging
purposes is a float, while the item it was being compared to was an integer.
No problem, I’ll just make the “70” into a “70.0” and that will fix the ... oh
wait. That didn’t work either.
OK. I tried various combinations, trying to see what would work and finally
determined that there is a way to get the temperature as an integer, but it
meant using a different method to capture it. This is done by adding this
line:
ComparisonTemp=$(($(cat /sys/class/thermal/thermal_zone0/temp)/1000))
The code above gets the temperature as an integer. I then use that in my `if`
statement for checking the temperature:
if [ ""$ComparisonTemp"" -gt ""70"" ]; then
echo ""Camera Pi Running Hot"" | mail -s ""Warning! The Camera Pi is Running Hot!!!"" user@domain.tld
fi
Giving a final script that looks like this:
MyDate=""`date +'%m/%d/%Y, %H:%M, '`""
MyTemp=""`/opt/vc/bin/vcgencmd measure_temp |tr -d ""=temp'C""`""
echo ""$MyDate$MyTemp"" >> /home/pi/Documents/python_projects/temperature/temp.log
ComparisonTemp=$(($(cat /sys/class/thermal/thermal_zone0/temp)/1000))
if [ ""$ComparisonTemp"" -gt ""70"" ]; then
echo ""Camera Pi Running Hot"" | mail -s ""Warning! The Camera Pi is Running Hot!!!"" user@domain.tld
fi
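For what it's worth, the same check could also be done from Python instead of bash. This is only a sketch of an equivalent, not part of the original script; the recipient address and the 70 C threshold mirror the script above, and calling the same ssmtp-backed `mail` command via `subprocess` is an assumption:
import subprocess

# read the SoC temperature in millidegrees C and convert to whole degrees
with open('/sys/class/thermal/thermal_zone0/temp') as f:
    temp_c = int(f.read().strip()) // 1000

if temp_c > 70:
    # send the warning through the same `mail` command installed with ssmtp
    subprocess.run(
        ['mail', '-s', 'Warning! The Camera Pi is Running Hot!!!', 'user@domain.tld'],
        input='Camera Pi Running Hot',
        text=True,
        check=True,
    )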
",2018-12-04,monitoring-the-temperature-of-my-raspberry-pi-camera,"In late April of this year I wrote a script that would capture the temperature
of the Raspberry Pi that sits above my Hummingbird feeder and log it to a
file.
It’s a straight forward enough script that captures the date, time and
temperature as given by the internal …
",Monitoring the temperature of my Raspberry Pi Camera,https://www.ryancheley.com/2018/12/04/monitoring-the-temperature-of-my-raspberry-pi-camera/
ryan,technology,"Every once in a while I get a wild hair and decide that I need to ‘clean up’
my directories. This **never** ends well and I almost always mess up
something, but I still do it.
Why? I’m not sure, except that I _forget_ that I’ll screw it up. 🤦♂️
Anyway, on a Saturday morning when I had nothing but time I decided that I’d
move my PyCharm directory from /Users/ryan/PyCharm to
/Users/ryan/Documents/PyCharm for no other reason than **because**.
I proceeded to use the command line to move the folder
mv /Users/ryan/PyCharm/ /Users/ryan/Documents/PyCharm/
Nothing too big, right. Just a simple file movement.
Not so much. I then tried to open a project in PyCharm and it promptly freaked
out. Since I use virtual environments for my Python Project AND they tend to
have paths that reference where they exist, suddenly ALL of my virtual
environments were kind of just _gone_.
Whoops!
OK. No big deal. I just undid my move
mv /Users/ryan/Documents/PyCharm/ /Users/ryan/PyCharm
That should fix me up, right?
Well, mostly. I had to re-register the virtual environments and reinstall all
of the packages in my projects (mostly not a big deal with PyCharm) but holy
crap it was scary. I thought I had hosed my entire set of projects (not that I
have anything that’s critical … but still).
Anyway, this is mostly a note to myself.
> > The next time you get a wild hair to move stuff around, just keep it where
> it is. There’s no reason for it (unless there is).
But seriously, ask yourself first, “If I don’t move this what will happen?” If
the answer is anything less than “Something awful” go watch a baseball game,
or go to the pool, or write some code. Don’t mess with your environment unless
you really want to spend a couple of hours cleaning up the mess!
",2018-08-12,moving-my-pycharm-directory-or-how-i-spent-my-saturday-after-jacking-up-my-pycharm-environment,"Every once in a while I get a wild hair and decide that I need to ‘clean up’
my directories. This **never** ends well and I almost always mess up
something, but I still do it.
Why? I’m not sure, except that I _forget_ that I’ll screw it …
",Moving my Pycharm Directory or How I spent my Saturday after jacking up my PyCharm environment,https://www.ryancheley.com/2018/08/12/moving-my-pycharm-directory-or-how-i-spent-my-saturday-after-jacking-up-my-pycharm-environment/
ryan,technology,"As soon as I discovered the Talk Python to me Podcast, I discovered the Talk
Python to me courses. Through my job I have a basically free subscription to
PluralSight so I wasn’t sure that I needed to pay for the courses when I was
effectively getting courses in Python for free.
After taking a couple ( well, truth be told, all ) of the Python courses at
PluralSight, I decided, what the heck, the courses at Talk Python looked
interesting, Michael Kennedy has a good instructor’s voice and is genuinely
excited about Python, and if it didn’t work out, it didn’t work out.
I’m so glad I did, and I’m so glad I went through the 100 Days of Web in
Python course.
On May 2, 2019 I saw that the course had been released and I
[tweeted](https://mobile.twitter.com/ryancheley/status/1124127232262152192
""This!"")
> > This x 1000000! Thank you so much \@TalkPython. I can’t wait to get
> started!
I started on the course on May 4, 2019 and completed it August 11, 2019. Full
details on the course are
[here](https://training.talkpython.fm/courses/details/100-days-of-web-in-
python ""#100DaysOfWeb in Python"").
Of the 28 concepts that were reviewed over the course, my favorites things
were learning [Django](https://www.djangoproject.com ""Django Project"") and
[Django Rest Framework](https://www.django-rest-framework.org ""DRF"") and
[Pelican](https://blog.getpelican.com ""Pelican""). Holy crap, those parts were
just so much fun for me. Part of my interest in Django and DRF comes from
[William S Vincent’s books](https://wsvincent.com/books/ ""Will Vincent Books"")
and Podcast [Django Chat](https://djangochat.com ""Django Chat""), but having
actual videos to watch to get me through some of the things that have been
conceptually tougher for me was a godsend.
The other part that I really liked was actual deployment to a server. I had
tried (about 16 months ago) to deploy a Django app to Digital Ocean and it was
an unmitigated disaster. No static files no matter what I did. I eventually
gave up.
In this course I really learned how to deploy to both
[Heroku](https://www.heroku.com ""Heroku"") and a Linux box on [Digital
Ocean](https://www.digitalocean.com ""Digital Ocean""), and so now I feel much
more confident that the app I’m working on (more on that below) will actually
see the light of day on something other than a dev machine!
The one thing that I started to build (and am continuing to work on) is an app
with a DRF backend and a Vue.js front end that allows a user to track which
Baseball [stadia](https://www.writing-skills.com/is-it-stadia-or-stadiums ""I’m
going with the proper Latin pluralization because I’m fancy like that"")
they’ve been to. So far I have an API set up via DRF (hosted at Heroku) and
sketches of what to do in Vue.js. There's also a Django front end (but it’s
not the solution I really want to use).
Writing code for 100 days is hard. Like really hard. For nearly 20 of those
days I was on a family vacation in the Mid Western part of the US, but I made
time for both the coding, and my family. My family was super supportive of my
goal which was helpful, but the content in the course was really interesting
and challenging and made me want to do it every day, which was also super
helpful.
On day 85 I got a video from Bob that helped get me through the last 2 weeks.
It was encouraging, and helpful which is just what I needed. So thank you Bob.
At the end I also got a nice [congratulatory
video](https://www.bonjoro.com/g/Wveg23mstaE) from Julian, which was
surprising to say the least, especially because he called out some of the
things that I tweeted that I enjoyed about the class, addressed me by name,
and just genuinely made me feel good about my accomplishment!
OK. I just wrapped up the 100 Days of Code with Python and the web. Now what?
I took a week off to recuperate and am now ready to ‘get back to it’.
After all, I’ve got baseball stadia to track in my app!
# Talk Python to me Podcast
Why I like the Talk Python Podcast
When I started listening to it
Listening to the back catalog (nearly all of it)
",2019-08-18,my-experience-with-the-100-days-of-web-in-python,"As soon as I discovered the Talk Python to me Podcast, I discovered the Talk
Python to me courses. Through my job I have a basically free subscription to
PluralSight so I wasn’t sure that I needed to pay for the courses when I was
effectively getting courses in …
",My Experience with the 100 Days of Web in Python,https://www.ryancheley.com/2019/08/18/my-experience-with-the-100-days-of-web-in-python/
ryan,technology,"Last September the annual Django Con was held in San Diego. I **really**
wanted to go, but because of other projects and conferences for my job, I
wasn’t able to make it.
The next best thing was to watch the [videos from DjangoCon on
YouTube](https://www.youtube.com/playlist?list=PL2NFhrDSOxgXXUMIGOs8lNe2B-f4pXOX-).
I watched a couple of the videos, but one that really caught my attention was
by [Carlton Gibson](https://github.com/carltongibson) titled “[Your Web
Framework Needs You: An Update by Carlton
Gibson](https://www.youtube.com/watch?v=LjTRSH0pNBo)”.
I took what Carlton said to heart and thought, I really should be able to do
_something_ to help.
I went to the [Django Issues site](https://code.djangoproject.com/) and
searched for an **Easy Pickings** issue that involved documentation and found
[issue 31006 “Document how to escape a date/time format character for the
|date and |time filters.”](https://code.djangoproject.com/ticket/31006)
I read the [steps on what I needed to do to submit a pull
request](https://docs.djangoproject.com/en/dev/internals/contributing/writing-
code/working-with-git/#publishing-work), but since it was my first time
**ever** participating like this … I was a bit lost.
Luckily there isn’t anything that you can break, so I was able to wander
around for a bit and get my bearings.
I forked the GitHub repo and I cloned it locally.
I then spent an **embarrassingly** long time trying to figure out where the
change was going to need to be made, and exactly what needed to change.
Finally, with my changes made, I [pushed my code
changes](https://github.com/django/django/pull/12128#issue-344767579) to
GitHub and waited.
Within a few hours [Mariusz Felisiak replied
back](https://github.com/django/django/pull/12128#issuecomment-557804299) and
asked about a suggestion he had made (but which I missed). I dug back into the
documentation, found what he was referring to, and made (what I thought) was
his suggested change.
Another push and a bit more waiting.
Mariusz Felisiak replied back with some input about the change I pushed up,
and I realized I had missed the mark on what he was suggesting.
OK. Third time’s a charm, right?
Turns out, in this case it was. [I pushed up one last
time](https://github.com/django/django/pull/12128#issuecomment-560278417) and
this time, my changes were
[merged](https://github.com/django/django/commit/cd7f48e85e3e4b9f13df6c0ef5f1d95abc079ff6#diff-7be9aaef6dad344e74188264c0e95daa)
into the master and just like that, I am now a contributor to Django (albeit a
very, very, very minor contributor).
Overall, this was a great experience, both with respect to learning about
contributing to an open source project, as well as learning about GitHub.
I’m hoping that with the holidays upon us I’ll be able to find the time to
pick up one or two (maybe even three) **Easy Pickings** issues from the Django
issue tracker.
",2019-12-07,my-first-commit-to-an-open-source-project-django,"Last September the annual Django Con was held in San Diego. I **really**
wanted to go, but because of other projects and conferences for my job, I
wasn’t able to make it.
The next best thing to to watch the [videos from DjangoCon on
YouTube](https://www.youtube.com/playlist?list=PL2NFhrDSOxgXXUMIGOs8lNe2B-f4pXOX-).
I watched a couple …
",My first commit to an Open Source Project: Django,https://www.ryancheley.com/2019/12/07/my-first-commit-to-an-open-source-project-django/
ryan,technology,"I've been writing code for about 15 years (on and off) and Python for about 4
or 5 years. With Python it's mostly small scripts and such. I’ve never
considered myself a ‘real programmer’ (Python or otherwise).
About a year ago, I decided to change that (for Python at the very least) when
I set out to do [100 Days Of Web in
Python](https://training.talkpython.fm/courses/details/100-days-of-web-in-
python) from [Talk Python To Me](https://talkpython.fm/home). Part of that
course were two sections taught by [Bob](https://pybit.es/author/bob.html)
regarding [Django](https://www.djangoproject.com). I had tried learn
[Flask](https://flask.palletsprojects.com/en/1.1.x/) before and found it ...
overwhelming to say the least.
Sure, you could get a ‘hello world’ app in 5 lines of code, but then what? If
you wanted to do just about anything it required ‘something’ else.
I had tried Django before, but wasn't able to get over the 'hump' of
deploying. Watching the Django section in the course made it just click for
me. Finally, a tool to help me make AND deploy something! But what?
## The Django App I wanted to create
A small project I had done previously was to write a short
[script](https://github.com/ryancheley/itfdb) for my Raspberry Pi to tell me
when LA Dodger (Baseball) games were on (it also has beloved Dodger Announcer
[Vin Scully](https://en.wikipedia.org/wiki/Vin_Scully) say his catch phrase,
“It’s time for Dodger baseball!!!”).
I love the Dodgers. But I also love baseball. I love baseball so much I have
on my bucket list a trip to visit all 30 MLB stadia. Given my love of
baseball, and my new found fondness of Django, I thought I could write
something to keep track of visited stadia. I mean, how hard could it _really_
be?
## What does it do?
My Django Site uses the [MLB API](https://statsapi.mlb.com) to search for
games and allows a user to indicate a game seen in person. This allows them to
track which stadia they've been to. My site is composed of 4 apps:
* Users
* Content
* API
* Stadium Tracker
The API is written using [Django Rest Framework (DRF)](https://www.django-
rest-framework.org) and is super simple to implement. It’s also [really easy
to make changes to your models if you need to](/updating-the-models-for-my-django-
rest-framework-api/).
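To give a flavor of what 'super simple' means here, this is a minimal DRF sketch; the `Stadium` model and its fields are made-up placeholders for illustration, not the actual models from the project:
from rest_framework import serializers, viewsets

from .models import Stadium  # hypothetical model, for illustration only


class StadiumSerializer(serializers.ModelSerializer):
    class Meta:
        model = Stadium
        fields = ['id', 'name', 'city']


class StadiumViewSet(viewsets.ModelViewSet):
    # a full CRUD endpoint in a handful of lines
    queryset = Stadium.objects.all()
    serializer_class = StadiumSerializer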
The Users app was inspired by [Will S Vincent](https://wsvincent.com) ( a
member of the [Django Software
Foundation](https://www.djangoproject.com/foundation/), author, and
[podcaster](https://djangochat.com)). He (and others) recommend creating a
custom user model to more easily extend the User model later on. Almost all of
what’s in my Users App is directly taken from his recommendations.
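The core of that recommendation is tiny; a minimal sketch looks like this (the app and model names here are illustrative, not necessarily what my project uses):
# users/models.py
from django.contrib.auth.models import AbstractUser


class CustomUser(AbstractUser):
    # no extra fields yet; they can be added later without a painful migration
    pass

# settings.py
AUTH_USER_MODEL = 'users.CustomUser'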
The Content App was created to allow me to update the [home
page](https://stadium-tracker-api.herokuapp.com), and [about
page](https://stadium-tracker-api.herokuapp.com/Pages/About) (and any other
content based page) using the database instead of updating html in a template.
The last App, and the reason for the site itself, is the Stadium Tracker! I
created a search tool that allows a user to find a game on a specific day
between two teams. Once found, the user can add that game to ‘Games Seen’.
This will then update the list of games seen for that user AND mark the
location of the game as a stadium visited. The best part is that because the
game is from the MLB API I can do some interesting things:
1. I can get the actual stadium from visited which allows the user to indicate historic (i.e. retired) stadia
2. I can get details of the game (final score, hits, runs, errors, stories from MLB, etc) and display them on a details page.
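The search itself leans on the public MLB stats API. A rough sketch of the kind of request involved looks like this; the exact query parameters, the team ID, and the fields pulled from the response are assumptions for illustration, not the site's actual code:
import requests

# look up all MLB games on a given date for a given team
# (119 is used here only as an example team ID)
response = requests.get(
    'https://statsapi.mlb.com/api/v1/schedule',
    params={'sportId': 1, 'date': '2019-06-21', 'teamId': 119},
)
response.raise_for_status()

for date in response.json().get('dates', []):
    for game in date.get('games', []):
        print(game.get('gamePk'), game.get('venue', {}).get('name'))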
That's great and all, but what does it look like?
### The Search Tool

### Stadia Listing
#### National League West

#### American League West

## What’s next?
I had created a roadmap at one point and was able to get through some (but not
all) of those items. Items left to do:
* Get Test coverage to at least 80% across the app (currently sits at 70%)
* Allow users to be based on social networks (right now I’m looking at Twitter, and Instagram) probably with the [Django Allauth Package](https://django-allauth.readthedocs.io/en/latest/installation.html)
* Add ability to for minor league team search and stadium tracking (this is already part of the MLB API, I just never implemented it)
* Allow user to search for range of dates for teams
* Update the theme ... it’s the default MUI CSS which is nice, but I’d rather it was something a little bit different
* Convert Swagger implementation from `django-rest-swagger` to `drf-yasg`
## Final Thoughts
Writing this app did several things for me.
First, it removed some of the tutorial paralysis that I felt. Until I wrote
this I didn’t think I was a web programmer (and I still don’t really), and
therefore had no business writing a web app.
Second, it taught me how to use git more effectively. This directly lead to me
[contributing to Django itself](/my-first-commit-to-an-open-source-project-
django.html) (in a very small way via updates to documentation). It also
allowed me to feel comfortable enough to write my first post on [this very
blog](https://pybit.es/using-python-to-check-for-file-changes-in-excel.html).
Finally, it introduced me to the wonderful ecosystem around Django. There is
so much to learn, but the great thing is that EVERYONE is learning something.
There isn’t anyone that knows it all which makes it easier to ask questions!
And helps me in feeling more confident to answer questions when asked.
The site is deployed on [Heroku](https://www.heroku.com) and can be seen
[here](https://stadium-tracker-api.herokuapp.com). The code for the site can
be seen [here](https://github.com/ryancheley/StadiumTrackerAPIPublic).
This article was also posted on the [PyBit.es Blog](https://pybit.es/my-first-
django-app.html)
",2020-05-02,my-first-django-project,"I've been writing code for about 15 years (on and off) and Python for about 4
or 5 years. With Python it's mostly small scripts and such. I’ve never
considered myself a ‘real programmer’ (Python or otherwise).
About a year ago, I decided to change that (for Python at …
",My First Django Project,https://www.ryancheley.com/2020/05/02/my-first-django-project/
ryan,technology,"As I mentioned in my last post, after completing the 100 Days of Web in Python
I was moving forward with a Django app I wrote.
I pushed up my first version to Heroku on August 24. At that point it would
allow users to add a game that they had seen, but when it disaplyed the games
it would show a number (the game’s ID) instead of anything useful.
A few nights ago (Aug 28) I committed a version which allows the user to see
which game they add, i.e. there are actual human readable details versus just
a number!
The page can be found [here](https://www.stadiatracker.com). It feels really
good to have it up in a place where people can actually see it. That being
said I discovered a couple of things on the publish that I’d like to fix.
I have a method that returns details about the game. One problem is that if
any of the elements return `None` then the front page returns a Server 500
error ... this is not good.
It took a bit of googling to see what the issue was. The way I found the
answer was to follow a suggestion to turn Debug to True on my ‘prod’ server and
look at the output. That helped me identify the issue.
To ‘fix’ it in the short term I just deleted all of the data for the games
seen in the database.
I’m glad that it happened because it taught me some stuff that I knew I needed
to do, but maybe didn’t pay enough attention to ... like writing unit tests.
Based on that experience I wrote out a roadmap of sorts for the updates I want
to get into the app:
* Tests for all classes and methods
* Ability to add minor league games
* Create a Stadium Listing View
* More robust search tool that allows a single team to be selected
* Logged in user view for only their games
* Create a List View of games logged per stadium
* Create a List View of attendees (i.e. users) at games logged
* Add more user features:
* Ability to add a picture
* Ability to add Twitter handle
* Ability to add Instagram handle
* Ability to add game notes
* Create a Heroku Pipeline to ensure that pushes to PROD are done through a UAT site
* Create a blog (as a pelican standalone sub domain)
It’s a lot of things but I’ve already done some things that I wanted to:
* Added SSL
* Set up to go to actual domain instead of Heroku subdomain
I’ll write up how I did the set up for the site so I can do it again. It’s not
well documented when your registrar is Hover and you’ve got your site on
Heroku. Man ... it was tough.
",2019-08-31,my-first-project-after-completing-the-100-days-of-web-in-python,"As I mentioned in my last post, after completing the 100 Days of Web in Python
I was moving forward with a Django app I wrote.
I pushed up my first version to Heroku on August 24. At that point it would
allow users to add a game that they …
",My first project after completing the 100 Days of Web in Python,https://www.ryancheley.com/2019/08/31/my-first-project-after-completing-the-100-days-of-web-in-python/
ryan,technology,"A few months ago I was inspired by [Simon Willison](https://simonwillison.net
""Simon, creator of Datasette"") and his project
[Datasette](https://datasette.io ""Datasette - An awesome tool for data
exploration and publishing"") and it’s related ecosystem to write a Python
Package for it.
I use [toggl](https://toggl.com ""Toggl - a time tracking tool"") to track my
time at work and I thought this would be a great opportunity use that data
with [Datasette](https://datasette.io ""Datasette - An awesome tool for data
exploration and publishing"") and see if I couldn’t answer some interesting
questions, or at the very least, do some neat data discovery.
The purpose of this package is to:
> Create a SQLite database containing data from your [toggl](https://toggl.com
> ""Toggl - a time tracking tool"") account
I followed the [tutorial for committing a package to
PyPi](https://packaging.python.org/tutorials/packaging-projects/ ""How do I add
a package to PyPi?"") and did the first few pushes manually. Then, using a
GitHub action from one of Simon’s [Datasette](https://datasette.io ""Datasette
- An awesome tool for data exploration and publishing"") projects, I was able
to automate it when I make a release on GitHub!
Since the initial commit on March 7 (my birthday BTW) I’ve had 10 releases,
with the most recent one coming yesterday which removed an issue with one of
the tables reporting back an API key which, if published on the internet, could
be a bad thing ... so hooray for security enhancements!
Anyway, it was a fun project, and got me more interested in authoring Python
packages. I’m hoping to do a few more related to
[Datasette](https://datasette.io) (although I’m not sure what to write
honestly!).
Be sure to check out the package on [PyPi.org](https://pypi.org/project/toggl-
to-sqlite/ ""toggl-to-SQLite"") and the source code on
[GitHub](https://github.com/ryancheley/toggl-to-sqlite/ ""GitHub repo of toggl-
to-sqlite"").
",2021-06-06,my-first-python-package,"A few months ago I was inspired by [Simon Willison](https://simonwillison.net
""Simon, creator of Datasette"") and his project
[Datasette](https://datasette.io ""Datasette - An awesome tool for data
exploration and publishing"") and it’s related ecosystem to write a Python
Package for it.
I use [toggl](https://toggl.com ""Toggl - a time tracking tool"") to track my
time at work and I thought this would be a great opportunity use that data
with [Datasette](https://datasette.io ""Datasette - An awesome tool for data
exploration and publishing"") and …
",My First Python Package,https://www.ryancheley.com/2021/06/06/my-first-python-package/
ryan,technology,"For Christmas I bought myself a 2017 13-inch MacBook Pro with Touch Bar.
Several bonuses were associated with the purchase:
1. A \$150 Apple Gift Card because I bought the MacBook Pro on Black Friday and Apple had a special going (w00t!)
2. The Credit Card I use to make **ALL** of my purchases at Apple has a 3% cash back (in the form of iTunes cards)
3. A free 30 minute online / phone session with an ‘Apple Specialist’
Now I didn’t know about item number 3 when I made the purchase, but was
greeted with an email informing me of my great luck.
This is my fifth Mac1 and I don’t remember ever getting this kind of service
before. So I figured, what the hell and decided to snooze the email until the
day after Christmas to remind myself to sign up for the session.
When I entered the session I was asked to optionally provide some information
about myself. I indicated that I had been using a Mac for several years and
considered myself an intermediate user.
My Apple ‘Specialist’ was
_[Jaime](http://gameofthrones.wikia.com/wiki/Jaime_Lannister ""No ... not that
one"")_. She confirmed the optional notes that I entered and we were off to the
races.
Now a lot of what she told me about Safari (blocking creepy tracking behavior,
ability to mute sound from auto play videos, default site to display in reader
view) I knew from the [WWDC
Keynote](https://developer.apple.com/videos/play/wwdc2017/101/ ""WWDC Keynote"")
that I watched back in June, but I listened just in case I had missed
something from that session (or the [10s / 100s of hours of
podcasts](https://relay.fm ""All the Great Shows!"") I listened to about the
Keynote).
One thing that I had heard about was the ability to _pin_ tabs in Safari. I
never really knew what that meant and figured it wasn’t anything that I
needed.
I was wrong. Holy crap is [pinning tabs in
Safari](https://www.youtube.com/watch?v=k-ssw5MKAno ""Pinning Tabs!"") a useful
feature! I can keep all of my most used sites pinned and get to them really
quickly and they get auto refreshed! Sweet!
The other super useful thing I found out about was the [Split
Screen](https://support.apple.com/en-us/HT204948 ""Split your screen ...
increase your productivity"") feature that allows you to split apps on your
screen (in a very iOS-y way!).
Finally, Jaime reviewed how to customize the touch bar! This one was super
useful as I think there are 2 discoverability issues with it:
1. The option to `Customize Touch Bar` is hidden in the `View` menu which isn’t somewhere I’d look for it
2. To [Customize the Touch Bar](https://support.apple.com/en-us/HT207055 ""Customization!"") you drag down from the Main Screen onto the Touch Bar.
After the call I received a nice follow up email from Apple / Jaime
> Now that you're more familiar with your new Mac, here are some additional
> resources that can help you go further.
>
> Apple Support Find answers to common questions, watch video tutorials,
> download user guides, and share solutions with the Apple community. [Visit
> Support](https://support.apple.com/mac)
>
> Today at Apple Discover inspiring programs happening near you. [Visit Today
> at Apple](https://www.apple.com/today/)
>
> Accessories From the Apple accessories page, you can learn about all kinds
> of new and innovative products that work with iPhone, iPad, Mac and more.
> [Visit Accessories](https://www.apple.com/shop/accessories/all-accessories)
>
> How to use the Touch Bar on your MacBook Pro - [https://support.apple.com/en-us/HT207055](https://support.apple.com/en-us/HT207055)
>
> Use Mission Control on your Mac - [https://support.apple.com/en-us/HT204100](https://support.apple.com/en-us/HT204100)
>
> Use two Mac apps side by side in Split View - [https://support.apple.com/en-us/HT204948](https://support.apple.com/en-us/HT204948)
>
> Websites preferences - [https://support.apple.com/guide/safari/websites-preferences-ibrwe2159f50](https://support.apple.com/guide/safari/websites-preferences-ibrwe2159f50)
I’m glad that I had the Mac session and I will encourage anyone that buys a
Mac in the future to schedule one.
1. They are in order of purchase: 2012 15-inch MacBook Pro, 2014 27-inch 5K iMac, 2015 MacBook, 2016 13-inch 2 Thunderbolt MacBook Pro; 2017 13-inch MacBook Pro with Touch Bar ↩︎
",2017-12-27,my-mac-session-with-apple,"For Christmas I bought myself a 2017 13-inch MacBook Pro with Touch Bar.
Several bonuses were associated with the purchase:
1. A \$150 Apple Gift Card because I bought the MacBook Pro on Black Friday and Apple had a special going (w00t!)
2. The Credit Card I use to make **ALL** of …
",My Mac session with Apple,https://www.ryancheley.com/2017/12/27/my-mac-session-with-apple/
ryan,technology,"I’d discovered a python package called `osmnx` which will take GIS data and
allow you to draw maps using python. Pretty cool, but I wasn’t sure what I was
going to do with it.
After a bit of playing around with it I finally decided that I could make some
pretty cool [Fractures](https://www.fractureme.com ""Fracture"").
I’ve got lots of Fracture images in my house and I even turned my diplomas
into Fractures to hang up on the wall at my office, but I hadn’t tried to make
anything like this before.
I needed to figure out what locations I was going to do. I decided that I
wanted to do 9 of them so that I could create a 3 x 3 grid of these maps.
I selected 9 cities that were important to me and my family for various
reasons.
Next writing the code. The script is 54 lines of code and doesn’t really
adhere to PEP8 but that just gives me a chance to do some reformatting /
refactoring later on.
In order to get the desired output I needed several libraries:
* osmnx (as I’d mentioned before)
* matplotlib.pyplot
* numpy
* PIL
If you’ve never used PIL before it’s the ‘Python Image Library’ and according
to it’s [home page](http://www.pythonware.com/products/pil/ ""Python Image
Library Home Page"") it
> adds image processing capabilities to your Python interpreter. This library
> supports many file formats, and provides powerful image processing and
> graphics capabilities.
OK, let’s import some libraries!
import osmnx as ox, geopandas as gpd, os
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from PIL import ImageFont
from PIL import ImageDraw
Next, we establish the configurations:
ox.config(log_file=True, log_console=False, use_cache=True)
The `ox.config` allows you to specify several options. In this case, I’m:
1. Specifying that the logs be saved to a file in the log directory
2. Suppressing the output of the log file to the console (this is helpful to have set to `True` when you’re first running the script to see what, if any, errors you have)
3. Setting `use_cache=True`, which will use a local cache to save/retrieve http responses instead of calling the API repetitively for the same request URL
This last option will help performance if you have to run the script more than
once.
OSMX has many different options to generate maps. I played around with the
options and found that the walking network within 750 meters of my address
gave me the most interesting lines.
AddressDistance = 750
AddressDistanceType = 'network'
AddressNetworkType = 'walk'
Now comes some of the most important decisions (and code!). Since I’ll be
making this into a printed image I want to make sure that the image and
resulting file will be of a high enough quality to render good results. I also
want to start with a white background (although a black background might have
been kind of cool). I also want to have a high DPI. Taking these needs into
consideration I set my plot variables:
PlotBackgroundColor = '#ffffff'
PlotNodeSize = 0
PlotFigureHeight = 40
PlotFigureWidth = 40
PlotFileType = 'png'
PlotDPI = 300
PlotEdgeLineWidth = 10.0
I played with the `PlotEdgeLineWidth` a bit until I got a result that I liked.
It controls how thick the route lines are and is influenced by the `PlotDPI`.
For the look I was going for 10.0 worked out well for me. I’m not sure if that
means a 30:1 ratio for `PlotDPI` to `PlotEdgeLineWidth` would be universal but
if you like what you see then it’s a good place to start.
One final piece was deciding on the landmarks that I was going to use. I
picked nine places that my family and I had been to together and used
addresses that were either of the places that we stayed at (usually hotels) OR
major landmarks in the areas that we stayed. Nothing special here, just a text
file with one location per line set up as
> Address, City, State
For example:
> 1234 Main Street, Anytown, CA
So we just read that file into memory:
landmarks = open('/Users/Ryan/Dropbox/Ryan/Python/landmarks.txt', 'r')
Next we set up some scaffolding so we can loop through the data effectively
landmarks = landmarks.readlines()
landmarks = [item.rstrip() for item in landmarks]
fill = (0,0,0,0)
city = []
The loop below is doing a couple of things:
1. Splits the landmarks array into base elements by breaking it apart at the commas. (I can do this because of the way that the addresses were entered. Changes may be needed to account for more complex addresses, i.e. those with multiple address lines (suite numbers, etc.) or local addresses that aren’t constructed in the same way that US addresses are.)
2. Appends the second and third elements of the `parts` array and replaces the space between them with an underscore to convert `Anytown, CA` to `Anytown_CA`
for element in landmarks:
    parts = element.split(',')
    city.append(parts[1].replace(' ', '', 1)+'_'+parts[2].replace(' ', ''))
This next line isn’t strictly necessary as it could just live in the loop, but
it was helpful for me when writing to see what was going on. We want to know
how many items are in the `landmarks`
rows = len(landmarks)
Now to loop through it. A couple of things of note:
The package includes several `graph_from_...` functions. They take as input
some type, like address, i.e. `graph_from_address` (which is what I’m using)
and have several keyword arguments.
In the code below I’m using the ith landmarks item and setting the `distance`,
`distance_type`, and `network_type` keyword arguments, and specifying an option to simplify the
map by setting `simplify=True`
To add some visual interest to the map I’m using this line
ec = ['#cc0000' if data['length'] >=100 else '#3366cc' for u, v, key, data in G.edges(keys=True, data=True)]
If the length of the part of the map is longer than 100m then the color is
displayed as `#cc0000` (red) otherwise it will be `#3366cc` (blue)
The `plot_graph` is what does the heavy lifting to generate the image. It
takes as input the output from the `graph_from_address` and `ec` to identify
what and how the map will look.
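Pieced together from the description above, the body of the loop probably looked something like this sketch (it isn’t the actual script; the keyword names follow the pre-1.0 osmnx API, which newer releases have since renamed):
for i in range(rows):
    # build the walking network around the ith landmark
    G = ox.graph_from_address(landmarks[i], distance=AddressDistance,
        distance_type=AddressDistanceType, network_type=AddressNetworkType,
        simplify=True)
    # red for longer segments, blue for shorter ones
    ec = ['#cc0000' if data['length'] >= 100 else '#3366cc' for u, v, key, data in G.edges(keys=True, data=True)]
    # draw and save the figure; older osmnx versions drop the file into an images/ folder by default
    fig, ax = ox.plot_graph(G, bgcolor=PlotBackgroundColor, node_size=PlotNodeSize,
        fig_height=PlotFigureHeight, fig_width=PlotFigureWidth, edge_color=ec,
        edge_linewidth=PlotEdgeLineWidth, dpi=PlotDPI, file_format=PlotFileType,
        filename=city[i], save=True, show=False)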
Next we use the `PIL` library to add text to the image. It takes into memory
the image file and saves out to a directory called `/images/`. My favorite
part of this library is that I can choose what font I want to use (whether
it’s part of the system fonts or a custom user font) and the size of the font.
For my project I used San Francisco at 512.
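Continuing inside that same loop, here is a minimal sketch of the text-stamping step with PIL; the font path and the coordinates are placeholders rather than the values from the real script:
# stamp the place name onto the saved map (font path and coordinates are placeholders)
img = Image.open('images/' + city[i] + '.png')
draw = ImageDraw.Draw(img)
font = ImageFont.truetype('/Library/Fonts/SF-Pro-Display-Regular.otf', 512)
draw.text((200, 200), city[i].replace('_', ', '), fill=(0, 0, 0), font=font)
img.save('images/' + city[i] + '.png')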
Finally, there is an exception for the code that adds text. The reason for
this is that when I was playing with adding text to the image I found that for
8 of 9 maps having the text in the upper left hand corner worked really well.
It was just that last one (San Luis Obispo, CA) that didn’t.
So, instead of trying to find a different landmark, I decided to take a bit of
artistic license and put the San Luis Obispo text in the upper right hand
corner.
Once the script is all set simply typing `python MapProject.py` in my terminal
window from the directory where the file is saved generated the files.
All I had to do was wait and the images were saved to my `/images/`
directory.
Next, upload to Fracture and order the glass images!
I received the images and was super excited. However, upon opening the box and
looking at them I noticed something wasn’t quite right
[caption id=""attachment_188"" align=""alignnone"" width=""2376""]![Napa with the
text slightly off the
image]images/uploads/2018/01/Image-12-16-17-6-55-AM.jpeg){.alignnone .size-
full .wp-image-188 width=""2376"" height=""2327""} Napa with the text slightly off
the image[/caption]
As you can see, the name place is cut off on the left. Bummer.
No reason to fret though! Fracture has a 100% satisfaction guarantee. So I
emailed support and explained the situation.
Within a couple of days I had my bright and shiny fractures to hang on my wall
[caption id=""attachment_187"" align=""alignnone"" width=""2138""]![Napa with the
text properly displaying]images/uploads/2018/01/IMG_9079.jpg){.alignnone
.size-full .wp-image-187 width=""2138"" height=""2138""} Napa with the text
properly displaying[/caption]
So that my office wall is no longer blank and boring:

but interesting and fun to look at

",2018-01-12,my-map-art-project,"I’d discovered a python package called `osmnx` which will take GIS data and
allow you to draw maps using python. Pretty cool, but I wasn’t sure what I was
going to do with it.
After a bit of playing around with it I finally decided that I could …
",My Map Art Project,https://www.ryancheley.com/2018/01/12/my-map-art-project/
ryan,technology,"I've been interested in python as a tool for a while and today I had the
chance to try and see what I could do.
With my 12.9 iPad Pro set up at my desk, I started out. I have [Ole Zorn's
Pythonista 3](http://omz-software.com/pythonista/) installed so I started on
my first script.
My first task was to scrape something from a website. I tried to start with a
website listing doctors, but for some reason the html rendered didn't include
anything useful.
So the next best thing was to find a website with staff listed on it. I used
my dad's company and his [staff listing](http://www.graphtek.com/Our-Team) as
a starting point.
I started with a quick Google search for Pythonista Web Scraping and came
across [this](https://forum.omz-software.com/topic/1513/screen-scraping) post
on the Pythonista forums.
That got me this much of my script:
import bs4, requests
myurl = 'http://www.graphtek.com/Our-Team'
def get_beautiful_soup(url):
    return bs4.BeautifulSoup(requests.get(url).text, ""html5lib"")
soup = get_beautiful_soup(myurl)
Next, I needed to see how to start traversing the html to get the elements
that I needed. I recalled something I read a while ago and was (luckily) able
to find some [help](https://first-web-scraper.readthedocs.io/en/latest/).
That got me this:
`tablemgmt = soup.findAll('div', attrs={'id':'our-team'})`
This was close, but it would only return 2 of the 3 `div` tags I cared about
(the management team has a different id for some reason ... )
I did a search for regular expressions and Python and found this useful
[stackoverflow](http://stackoverflow.com/questions/24748445/beautiful-soup-
using-regex-to-find-tags) question and saw that if I updated my imports to
include `re` then I could use regular expressions.
Great, update the imports section to this:
`import bs4, requests, re`
And added `re.compile` to my `findAll` to get this:
`tablemgmt = soup.findAll('div', attrs={'id':re.compile('our-team')})`
Now I had all 3 of the `div` tags I cared about.
Of course the next thing I wanted to do was get the information I cared about out of
the structure `tablemgmt`.
When I printed out the results I noticed leading and trailing square brackets
and every time I tried to do something I'd get an error.
It took an embarrassingly long time to realize that I needed to treat
`tablemgmt` as an array. Whoops!
Once I got through that it was straight forward to loop through the data and
output it:
list_of_names = []
for i in tablemgmt:
    for row in i.findAll('span', attrs={'class':'team-name'}):
        text = row.text.replace('\n', '')
        if len(text) > 0:
            list_of_names.append(text)
list_of_titles = []
for i in tablemgmt:
    for row in i.findAll('span', attrs={'class':'team-title'}):
        text = row.text.replace('\n', '')
        if len(text) > 0:
            list_of_titles.append(text)
The last bit I wanted to do was to add some headers **and** make the lists
into a two column multimarkdown table.
OK, first I needed to see how to 'combine' the lists into a multidimensional
array. Another google search and ... success. Of course the answer would be on
[stackoverflow](http://stackoverflow.com/questions/12040989/printing-all-the-
values-from-multiple-lists-at-the-same-time)
With my knowledge of looping through arrays and the function `zip` I was able
to get this:
for j, k in zip(list_of_names, list_of_titles):
    print('|'+ j + '|' + k + '|')
Which would output this:
|Mike Cheley|CEO/Creative Director|
|Ozzy|Official Greeter|
|Jay Sant|Vice President|
|Shawn Isaac|Vice President|
|Jason Gurzi|SEM Specialist|
|Yvonne Valles|Director of First Impressions|
|Ed Lowell|Senior Designer|
|Paul Hasas|User Interface Designer|
|Alan Schmidt|Senior Web Developer|
This is close, however, it still needs headers.
No problem, just add some static lines to print out:
print('| Name | Title |')
print('| --- | --- |')
And voila, we have a multimarkdown table that was scraped from a web page:
| Name | Title |
| --- | --- |
|Mike Cheley|CEO/Creative Director|
|Ozzy|Official Greeter|
|Jay Sant|Vice President|
|Shawn Isaac|Vice President|
|Jason Gurzi|SEM Specialist|
|Yvonne Valles|Director of First Impressions|
|Ed Lowell|Senior Designer|
|Paul Hasas|User Interface Designer|
|Alan Schmidt|Senior Web Developer|
Which will render to this:
Name | Title
--- | ---
Mike Cheley | CEO/Creative Director
Ozzy | Official Greeter
Jay Sant | Vice President
Shawn Isaac | Vice President
Jason Gurzi | SEM Specialist
Yvonne Valles | Director of First Impressions
Ed Lowell | Senior Designer
Paul Hasas | User Interface Designer
Alan Schmidt | Senior Web Developer
",2016-10-15,my–first–python-script-that-does-something,"I've been interested in python as a tool for a while and today I had the
chance to try and see what I could do.
With my 12.9 iPad Pro set up at my desk, I started out. I have [Ole Zorn's
Pythonista 3](http://omz-software.com/pythonista/) installed so I started on …
",My First Python Script that does 'something',https://www.ryancheley.com/2016/10/15/my–first–python-script-that-does-something/
ryan,technology,"New Watch
## The first week
I've been rocking a series 2 Apple Watch for about 18 months. I timed my
purchase just right to not get a series 3 when it went on sale (🤦🏻♂️). When
the series 4 was released I decided that I wanted to get one, but was a bit
too slow (and tired) to stay up and order one at launch.
This meant that I didn't get my new Apple Watch until last Saturday (nearly 5
weeks later). I wanted to write down my thoughts on the Watch and what it's
meant for me. I won't go into specs and details, just what I've found that I
liked and didn't like.
## The Good
Holy crap is it fast. I mean, like really fast. I've never had a watch that
responded like this (before my series 2 I had a series 0).
It reacts when I want it to, so much so that I'm sometimes not prepared. It
reminds me of the transition from Touch ID Gen 1 to Touch ID Gen 2. I really
appreciate how fast everything comes up. When I start an activity, it’s there
(no more waiting like on Series 2). When I want to pair with my AirPods … it’s
there and ready to go.
I also really like how much thinner it is and the increase in size. At first I
thought it was ‘monstrous’ but now I’m trying to figure out how I ever lived
with 2 fewer millimeters.
I also decided to get the Cellular Version just in case. It was a bit more
expensive, and I probably won’t end up using it past the free trial I got, but
it’s nice to know that I can have it if I need it. I haven’t had a chance to
use it (yet) but hopefully I’ll get a chance here soon.
## The Bad
So far, nothing has struck me as being ‘bad’. It’s the first Apple Watch I’ve
had that’s really exceeded my expectations in terms of performance and sheer
joy that I get out of using it.
## Conclusion
Overall I **love** the Series 4 Watch. It doesn’t do anything different than
the Series 2 that I had (except I can make phone calls without my phone if I
need to) but _oh my_ is it fast! If someone is on a Series 2 and is wondering
if jumping to the Series 4 is worth it … it totally is.
",2018-11-03,new-apple-watch,"New Watch
## The first week
I've been rocking a series 2 Apple Watch for about 18 months. I timed my
purchase just right to not get a series 3 when it went on sale (🤦🏻♂️). When
the series 4 was released I decided that I wanted to get one, but was …
",New Watch,https://www.ryancheley.com/2018/11/03/new-apple-watch/
ryan,technology,"I'm an avid [Twitter](https://www.twitter.com) user, mostly as a replacement
[RSS](https://en.wikipedia.org/wiki/RSS) feeder, but also because I can't
stand [Facebook](https://www.facebook.com) and this allows me to learn about
really important world events when I need to and to just stay isolated with
[my head in the
sand](http://gerdleonhard.typepad.com/.a/6a00d8341c59be53ef013488b614d8970c-800wi)
when I don't. It's perfect for me.
One of the people I follow on [Twitter](https://twitter.com/drdrang) is [Dr.
Drang](http://www.leancrew.com/all-this/) who is an Engineer of some kind by
training. He also appears to be a fan of baseball and posted an [analysis of
Jake Arrieata's pitching](http://leancrew.com/all-this/2016/09/jake-arrieta-
and-python/) over the course of the 2016 MLB season (through September 22 at
least).
When I first read it I hadn't done too much with Python, and while I found the
results interesting, I wasn't sure what any of the code was doing (not really
anyway).
Since I had just spent the last couple of days learning more about
`BeautifulSoup` specifically and `Python` in general I thought I'd try to do
two things:
1. Update the data used by Dr. Drang
2. Try to generalize it for any pitcher
Dr. Drang uses a flat csv file for his analysis and I wanted to use
`BeautifulSoup` to scrape the data from [ESPN](https://www.espn.com) directly.
OK, I know how to do that (sort of ¯\ _(ツ)_ /¯)
First things first, import your libraries:
import pandas as pd
from functools import partial
import requests
import re
from bs4 import BeautifulSoup
import matplotlib.pyplot as plt
from datetime import datetime, date
from time import strptime
The next two lines I ~~stole~~ borrowed directly from Dr. Drang's post. The
first line is to force the plot output to be inline with the code entered in
the terminal. The second he explains as such:
> The odd ones are the `rcParams` call, which makes the inline graphs bigger
> than the tiny Jupyter default, and the functools import, which will help us
> create ERAs over small portions of the season.
I'm not using [Jupyter](http://jupyter.org) I'm using
[Rodeo](http://rodeo.yhat.com) as my IDE but I kept them all the same:
%matplotlib inline
plt.rcParams['figure.figsize'] = (12,9)
In the next section I use `BeautifulSoup` to scrape the data I want from
[ESPN](https://www.espn.com):
url = 'http://www.espn.com/mlb/player/gamelog/_/id/30145/jake-arrieta'
r = requests.get(url)
year = 2016
date_pitched = []
full_ip = []
part_ip = []
earned_runs = []
tables = BeautifulSoup(r.text, 'lxml').find_all('table', class_='tablehead mod-player-stats')
for table in tables:
    for row in table.find_all('tr'): # Remove header
        columns = row.find_all('td')
        try:
            if re.match('[a-zA-Z]{3}\s', columns[0].text) is not None:
                date_pitched.append(
                    date(
                        year
                        , strptime(columns[0].text.split(' ')[0], '%b').tm_mon
                        , int(columns[0].text.split(' ')[1])
                    )
                )
                full_ip.append(str(columns[3].text).split('.')[0])
                part_ip.append(str(columns[3].text).split('.')[1])
                earned_runs.append(columns[6].text)
        except Exception as e:
            pass
This is basically a rehash of what I did for my Passer scraping
([here](https://www.ryancheley.com/blog/2016/11/17/web-scrapping),
[here](https://www.ryancheley.com/blog/2016/11/18/web-scrapping-passer-data-
part-ii), and [here](https://www.ryancheley.com/blog/2016/11/19/web-scrapping-
passer-data-part-iii)).
This proved a useful starting point, but unlike the NFL data on ESPN which has
pre- and regular season breaks, the MLB data on ESPN has monthly breaks, like
this:
Regular Season Games through October 2, 2016
DATE
Oct 1
Monthly Totals
DATE
Sep 24
Sep 19
Sep 14
Sep 9
Monthly Totals
DATE
Jun 26
Jun 20
Jun 15
Jun 10
Jun 4
Monthly Totals
DATE
May 29
May 23
May 17
May 12
May 7
May 1
Monthly Totals
DATE
Apr 26
Apr 21
Apr 15
Apr 9
Apr 4
Monthly Totals
However, all I wanted was the lines that correspond to `columns[0].text` with
actual dates like 'Apr 21'.
In reviewing how the dates were being displayed it was basically '%b %d', i.e.
May 12, Jun 4, etc. This is great because it means I want 3 letters and then a
space and nothing else. Turns out, Regular Expressions are great for stuff
like this!
After a bit of [Googling](https://www.google.com) I got what I was looking
for:
re.match('[a-zA-Z]{3}\s', columns[0].text)
To get my regular expression and then just add an `if` in front and call it
good!
The only issue was that as I ran it in testing, I kept getting no return data.
What I didn't realize is that `re.match` returns `None` when there is no match. Enter more
Googling and I see that in order for the `if` to work I have to add the `is
not None`, which leads to the results that I wanted:
Oct 22
Oct 16
Oct 13
Oct 11
Oct 7
Oct 1
Sep 24
Sep 19
Sep 14
Sep 9
Jun 26
Jun 20
Jun 15
Jun 10
Jun 4
May 29
May 23
May 17
May 12
May 7
May 1
Apr 26
Apr 21
Apr 15
Apr 9
Apr 4
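A quick illustration of that behavior (the sample strings here just mimic rows from the ESPN table):
import re

print(re.match('[a-zA-Z]{3}\s', 'Apr 21'))          # a match object for 'Apr '
print(re.match('[a-zA-Z]{3}\s', 'Monthly Totals'))  # None -- 'Mon' is followed by 't', not a space
print(re.match('[a-zA-Z]{3}\s', 'DATE'))            # None

# the explicit `is not None` check is what the if statement needed
if re.match('[a-zA-Z]{3}\s', 'Apr 21') is not None:
    print('looks like a game date')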
The next part of the transformation is to convert to a date so I can sort on
it (and display it properly) later.
With all of the data I need, I put the columns into a `Dictionary`:
dic = {'date': date_pitched, 'Full_IP': full_ip, 'Partial_IP': part_ip, 'ER': earned_runs}
and then into a `DataFrame`:
games = pd.DataFrame(dic)
and apply some manipulations to the `DataFrame`:
games = games.sort_values(['date'], ascending=[True])
games[['Full_IP','Partial_IP', 'ER']] = games[['Full_IP','Partial_IP', 'ER']].apply(pd.to_numeric)
Now to apply some Baseball math to get the Earned Run Average:
games['IP'] = games.Full_IP + games.Partial_IP/3
games['GERA'] = games.ER/games.IP*9
games['CIP'] = games.IP.cumsum()
games['CER'] = games.ER.cumsum()
games['ERA'] = games.CER/games.CIP*9
In the next part of Dr. Drang's post he writes a custom function to help
create moving averages. It looks like this:
def rera(games, row):
    if row.name+1 < games:
        ip = df.IP[:row.name+1].sum()
        er = df.ER[:row.name+1].sum()
    else:
        ip = df.IP[row.name+1-games:row.name+1].sum()
        er = df.ER[row.name+1-games:row.name+1].sum()
    return er/ip*9
The only problem with it is I called my `DataFrame` `games`, not `df`. Simple
enough, I'll just replace `df` with `games` and call it a day, right? Nope:
def rera(games, row):
    if row.name+1 < games:
        ip = games.IP[:row.name+1].sum()
        er = games.ER[:row.name+1].sum()
    else:
        ip = games.IP[row.name+1-games:row.name+1].sum()
        er = games.ER[row.name+1-games:row.name+1].sum()
    return er/ip*9
When I try to run the code I get errors. Lots of them. This is because while I
made sure to update the `DataFrame` name to be correct, I overlooked that the
function was using a parameter called `games` and `Python` got a bit confused
about what was what.
OK, round two, replace the parameter `games` with `games_t`:
def rera(games_t, row):
    if row.name+1 < games_t:
        ip = games.IP[:row.name+1].sum()
        er = games.ER[:row.name+1].sum()
    else:
        ip = games.IP[row.name+1-games_t:row.name+1].sum()
        er = games.ER[row.name+1-games_t:row.name+1].sum()
    return er/ip*9
No more errors! Now we calculate the 3- and 4-game moving averages:
era4 = partial(rera, 4)
era3 = partial(rera,3)
and then add them to the `DataFrame`:
games['ERA4'] = games.apply(era4, axis=1)
games['ERA3'] = games.apply(era3, axis=1)
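As an aside, `partial` just pre-fills the first argument of `rera`, so `era4(row)` ends up being the same call as `rera(4, row)`, which is the single-argument function that `apply` needs. A toy example (the names here are made up, not from the analysis):
from functools import partial

def describe(window, row):
    # toy stand-in for rera: the first argument is the window size
    return 'window={}, row={}'.format(window, row)

last_four = partial(describe, 4)   # pre-fills window=4
print(last_four('row 17'))         # same as describe(4, 'row 17')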
And print out a pretty graph:
plt.plot_date(games.date, games.ERA3, '-b', lw=2)
plt.plot_date(games.date, games.ERA4, '-r', lw=2)
plt.plot_date(games.date, games.GERA, '.k', ms=10)
plt.plot_date(games.date, games.ERA, '--k', lw=2)
plt.show()
Dr. Drang focused on Jake Arrieta (he is a Chicago guy after all), but I
thought it would be interesting to look at the graphs for Arrieta and the top 5
finishers in the NL Cy Young Voting (because Clayton Kershaw was 5th place and
I'm a Dodgers guy).
Here is the graph for [Jake
Arrieta](http://www.espn.com/mlb/player/gamelog/_/id/30145/jake-arrieta):

And here are the graphs for the top 5 finishers in Ascending order in the
[2016 NL Cy Young voting](http://bbwaa.com/16-nl-cy/):
[Max Scherzer](http://www.espn.com/mlb/player/gamelog/_/id/28976/max-scherzer)
winner of the 2016 NL [Cy Young
Award](https://en.wikipedia.org/wiki/Cy_Young_Award) 
[Jon Lester](http://www.espn.com/mlb/player/gamelog/_/id/28487/jon-lester)

[Kyle Hendricks](http://www.espn.com/mlb/player/gamelog/_/id/33173/kyle-
hendricks) 
[Madison Bumgarner](http://www.espn.com/mlb/player/gamelog/_/id/29949/madison-
bumgarner) 
[Clayton Kershaw](http://www.espn.com/mlb/player/gamelog/_/id/28963/clayton-
kershaw):

I've not spent much time analyzing the data, but I'm sure that it says
_something_. At the very least, it got me to wonder, 'How many 0 ER games did
each pitcher pitch?'
I also noticed that the stats include the playoffs (which I wasn't intending).
Another thing to look at later.
Legend:
* Black Dot - ERA on Date of Game
* Black Solid Line - Cumulative ERA
* Blue Solid Line - 3-game trailing average ERA
* Red Solid Line - 4-game trailing average ERA
Full code can be found on my [Github Repo](https://www.github.com/miloardot)
",2016-11-21,pitching-stats-and-python,"I'm an avid [Twitter](https://www.twitter.com) user, mostly as a replacement
[RSS](https://en.wikipedia.org/wiki/RSS) feeder, but also because I can't
stand [Facebook](https://www.facebook.com) and this allows me to learn about
really important world events when I need to and to just stay isolated with
[my head in the
sand](http://gerdleonhard.typepad.com/.a/6a00d8341c59be53ef013488b614d8970c-800wi)
when I don't. It's perfect for …
",Pitching Stats and Python,https://www.ryancheley.com/2016/11/21/pitching-stats-and-python/
ryan,technology,"OK, we’ve got our server ready for our Django App. We set up Gunicorn and
Nginx. We created the user which will run our app and set up all of the
folders that will be needed.
Now, we work on deploying the code!
## Deploying the Code
There are 3 parts for deploying our code:
1. Collect Locally
2. Copy to Server
3. Place in correct directory
Why don’t we just copy to the spot on the server we want it to finally be in?
Because we’ll need to restart Nginx once we’re fully deployed and it’s easier
to have that done in 2 steps than in 1.
### Collect the Code Locally
My project is structured such that there is a `deploy` folder which is on the
Same Level as my Django Project Folder. That is to say

We want to clear out any old code. To do this we run from the same level that
the Django Project Folder is in
rm -rf deploy/*
This will remove ALL of the files and folders that were present. Next, we want
to copy the data from the `yoursite` folder to the deploy folder:
rsync -rv --exclude 'htmlcov' --exclude 'venv' --exclude '*__pycache__*' --exclude '*staticfiles*' --exclude '*.pyc' yoursite/* deploy
Again, running this from the same folder. I’m using `rsync` here as it has a
really good API for allowing me to exclude items (I’m sure the above could be
done better with a mix of Regular Expressions, but this gets the job done)
### Copy to the Server
We have the files collected, now we need to copy them to the server.
This is done in two steps. Again, we want to remove ALL of the files in the
deploy folder on the server (see rationale from above)
ssh root@$SERVER ""rm -rf /root/deploy/""
Next, we use `scp` to secure copy the files to the server
scp -r deploy root@$SERVER:/root/
Our files are now on the server!
### Installing the Code
We have several steps to get through in order to install the code. They are:
1. Activate the Virtual Environment
2. Deleting old files
3. Copying new files
4. Installing Python packages
5. Running Django migrations
6. Collecting static files
7. Reloading Gunicorn
Before we can do any of this we’ll need to `ssh` into our server. Once that’s
done, we can proceed with the steps below.
Above we created our virtual environment in a folder called `venv` located in
`/home/yoursite/`. We’ll want to activate it now (1)
source /home/yoursite/venv/bin/activate
Next, we change directory into the yoursite home directory
cd /home/yoursite/
Now, we delete the old files from the last install (2):
rm -rf /home/yoursite/yoursite
Copy our new files (3)
cp -r /root/deploy/ /home/yoursite/yoursite
Install our Python packages (4)
pip install -r /home/yoursite/yoursite/requirements.txt
Run any migrations (5)
python /home/yoursite/yoursite/manage.py migrate
Collect Static Files (6)
python /home/yoursite/yoursite/manage.py collectstatic
Finally, reload Gunicorn
systemctl daemon-reload
systemctl restart gunicorn
When we visit our domain we should see our Django Site.
",2021-02-14,preparing-the-code-for-deployment-to-digital-ocean,"OK, we’ve got our server ready for our Django App. We set up Gunicorn and
Nginx. We created the user which will run our app and set up all of the
folders that will be needed.
Now, we work on deploying the code!
## Deploying the Code
There are 3 …
",Preparing the code for deployment to Digital Ocean,https://www.ryancheley.com/2021/02/14/preparing-the-code-for-deployment-to-digital-ocean/
ryan,technology,"One of the great things about computers is their ability to take tabular data
and turn them into pictures that are easier to interpret. I'm always amazed
when given the opportunity to show data as a picture, more people don't jump
at the chance.
For example, [this piece on ESPN regarding the difference in officiating crews
and their calls](http://www.espn.com/blog/nflnation/post/_/id/225804/aaron-
rodgers-could-get-some-help-from-referee-jeff-triplette) has some great data
in it regarding how different officiating crews call games.
One thing I find a bit disconcerting is:
1. ~~One of the rows is missing data so that row looks 'odd' in the context of the story and makes it look like the writer missed a big thing ... they didn't~~ (it's since been fixed)
2. This tabular format is just begging to be displayed as a picture.
Perhaps the issue here is that the author didn't know how to best visualize
the data to make his story, but I'm going to help him out.
If we start from the underlying premise that not all officiating crews call
games in the same way, we want to see in what ways they differ.
The data below is a reproduction of the table from the article:
| REFEREE | DEF. OFFSIDE | ENCROACH | FALSE START | NEUTRAL ZONE | TOTAL |
| --- | --- | --- | --- | --- | --- |
| Triplette, Jeff | 39 | 2 | 34 | 6 | 81 |
| Anderson, Walt | 12 | 2 | 39 | 10 | 63 |
| Blakeman, Clete | 13 | 2 | 41 | 7 | 63 |
| Hussey, John | 10 | 3 | 42 | 3 | 58 |
| Cheffers, Cartlon | 22 | 0 | 31 | 3 | 56 |
| Corrente, Tony | 14 | 1 | 31 | 8 | 54 |
| Steratore, Gene | 19 | 1 | 29 | 5 | 54 |
| Torbert, Ronald | 9 | 4 | 31 | 7 | 51 |
| Allen, Brad | 15 | 1 | 28 | 6 | 50 |
| McAulay, Terry | 10 | 4 | 23 | 12 | 49 |
| Vinovich, Bill | 8 | 7 | 29 | 5 | 49 |
| Morelli, Peter | 12 | 3 | 24 | 9 | 48 |
| Boger, Jerome | 11 | 3 | 27 | 6 | 47 |
| Wrolstad, Craig | 9 | 1 | 31 | 5 | 46 |
| Hochuli, Ed | 5 | 2 | 33 | 4 | 44 |
| Coleman, Walt | 9 | 2 | 25 | 4 | 40 |
| Parry, John | 7 | 5 | 20 | 6 | 38 |
The author points out:
> Jeff Triplette's crew has called a combined 81 such penalties -- 18 more
> than the next-highest crew and more than twice the amount of two others
The author goes on to talk about his interview with [Mike
Pereira](https://en.wikipedia.org/wiki/Mike_Pereira) (who happens to be
~~pimping~~ promoting his new book).
While the table above is _helpful_ it's not an image that you can look at and
ask, ""Man, what the heck is going on?"" There is a visceral aspect to it that
says, something is wrong here ... but I can't **really** be sure about what it
is.
Let's sum up the defensive penalties (Defensive Offsides, Encroachment, and
Neutral Zone Infractions) and see what the table looks like:
| REFEREE | DEF Total | OFF Total | TOTAL |
| --- | --- | --- | --- |
| Triplette, Jeff | 47 | 34 | 81 |
| Anderson, Walt | 24 | 39 | 63 |
| Blakeman, Clete | 22 | 41 | 63 |
| Hussey, John | 16 | 42 | 58 |
| Cheffers, Cartlon | 25 | 31 | 56 |
| Corrente, Tony | 23 | 31 | 54 |
| Steratore, Gene | 25 | 29 | 54 |
| Torbert, Ronald | 20 | 31 | 51 |
| Allen, Brad | 22 | 28 | 50 |
| McAulay, Terry | 26 | 23 | 49 |
| Vinovich, Bill | 20 | 29 | 49 |
| Morelli, Peter | 24 | 24 | 48 |
| Boger, Jerome | 20 | 27 | 47 |
| Wrolstad, Craig | 15 | 31 | 46 |
| Hochuli, Ed | 11 | 33 | 44 |
| Coleman, Walt | 15 | 25 | 40 |
| Parry, John | 18 | 20 | 38 |
Now we can see what might actually be going on, but it's still a bit hard for
those visual people. If we take this data and then generate a scatter plot we
might have a picture to show us the issue. Something like this:

The horizontal dashed blue lines represent the average defensive calls per
crew while the vertical dashed blue line represents the average offensive
calls per crew. The gray box represents the area containing plus/minus 2
standard deviations from the mean for both offensive and defensive penalty
calls.
Notice anything? Yeah, me too. Jeff Triplette's crew is so far out of range
for defensive penalties it's like they're watching a different game, or
reading from a different play book.
What I'd really like to be able to do is this same analysis but on a game by
game basis. I don't think this would really change the way that Jeff Triplette
and his crew call games, but it may point out some other inconsistencies that
are worth exploring.
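The actual script is in the repo linked below, but the core of the plot is roughly this sketch, built from the summed table above rather than the real code:
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import pandas as pd

# defensive and offensive penalty totals per crew, taken from the summed table above
crews = pd.DataFrame({
    'referee': ['Triplette', 'Anderson', 'Blakeman', 'Hussey', 'Cheffers',
                'Corrente', 'Steratore', 'Torbert', 'Allen', 'McAulay',
                'Vinovich', 'Morelli', 'Boger', 'Wrolstad', 'Hochuli',
                'Coleman', 'Parry'],
    'defense': [47, 24, 22, 16, 25, 23, 25, 20, 22, 26, 20, 24, 20, 15, 11, 15, 18],
    'offense': [34, 39, 41, 42, 31, 31, 29, 31, 28, 23, 29, 24, 27, 31, 33, 25, 20],
})

fig, ax = plt.subplots()
ax.scatter(crews.offense, crews.defense, color='black')

# dashed blue lines at the average defensive (horizontal) and offensive (vertical) calls per crew
ax.axhline(crews.defense.mean(), linestyle='--', color='blue')
ax.axvline(crews.offense.mean(), linestyle='--', color='blue')

# gray box covering plus/minus 2 standard deviations from the mean in each direction
ax.add_patch(patches.Rectangle(
    (crews.offense.mean() - 2 * crews.offense.std(), crews.defense.mean() - 2 * crews.defense.std()),
    4 * crews.offense.std(), 4 * crews.defense.std(), color='gray', alpha=0.3))

ax.set_xlabel('Offensive penalties called')
ax.set_ylabel('Defensive penalties called')
plt.show()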
Code for this project can be found on my [GitHub
Repo](https://github.com/miloardot/python-files/blob/master/Referees)
",2016-12-25,presenting-data-referee-crew-calls-in-the-nfl,"One of the great things about computers is their ability to take tabular data
and turn them into pictures that are easier to interpret. I'm always amazed
when given the opportunity to show data as a picture, more people don't jump
at the chance.
For example, [this piece on ESPN
…](http://www.espn.com/blog/nflnation/post/_/id/225804/aaron-rodgers-could-
get-some-help-from-referee-jeff-triplette)
",Presenting Data - Referee Crew Calls in the NFL,https://www.ryancheley.com/2016/12/25/presenting-data-referee-crew-calls-in-the-nfl/
ryan,technology,"At my job I work with some really talented Web Developers that are saddled
with a pretty creaky legacy system.
We're getting ready to start on a new(ish) project where we'll be taking an
old project built on this creaky legacy system (`VB.net`) and re-implementing
it on a `C#` backend and an `Angular` front end. We'll be working on a lot of
new features and integrations so it's worth rebuilding it versus shoehorning
the new requirements into the legacy system.
The details of the project aren't really important. What is important is that
as I was reviewing the requirements with the Web Developer Supervisor he said
something to the effect of, ""We can create a proof of concept and just hard
code the data in a json file to fake the backend.""
The issue is ... we already have the data that we'll need in a MS SQL database
(it's what is running the legacy version) it's just a matter of getting it
into the right json ""shape"".
Creating a 'fake' json object that kind of/maybe mimics the real data is
something we've done before, and it ALWAYS seems to bite us in the butt. We
don't account for proper pagination, or the real lengths of data in the fields
or NULL values or whatever shenanigans happen to befall real world data!
This got me thinking about [Simon Willison](https://simonwillison.net)'s
project [Datasette](https://datasette.io) and using it to prototype the API
end points we would need.
I had been trying to figure out how to use the `db-to-sqlite` to extract data
from a MS SQL database into a SQLite database and was successful (see my PR to
`db-to-sqlite` [here](https://github.com/ryancheley/db-to-
sqlite/tree/ryancheley-patch-1-document-updates#using-db-to-sqlite-with-ms-
sql))
With this idea in hand, I reviewed it with the Supervisor and then scheduled a
call with the web developers to review `datasette`.
During this meeting, I wanted to review:
1. The motivation behind why we would want to use it
2. How we could leverage it to do [Rapid Prototyping](https://datasette.io/for/rapid-prototyping)
3. Give a quick demo using data from the stored procedure that returns the data for the legacy version of the project.
In all it took less than 10 minutes to go from nothing to a local instance of
`datasette` running with a prototype JSON API for the web developers to see.
I'm hoping to see the Web team use this concept more going forward as I can
see huge benefits for Rapid Prototyping of ideas, especially if you already
have the data housed in a database. But even if you don't, `datasette` has
tons of [tools](https://datasette.io/tools) to get the data from a variety of
sources into a SQLite database to use and then you can do the rapid
prototyping!
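For example, `sqlite-utils` (one of the tools in that list) makes it pretty painless to load rows into a SQLite database from Python; here's a small sketch with made-up data standing in for the real stored procedure output:
import sqlite_utils

# made-up rows standing in for the output of the legacy stored procedure
rows = [
    {'id': 1, 'name': 'Widget A', 'status': 'active'},
    {'id': 2, 'name': 'Widget B', 'status': 'retired'},
]

db = sqlite_utils.Database('prototype.db')
db['widgets'].insert_all(rows, pk='id')

# then, from the command line:  datasette prototype.db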
",2021-08-09,prototyping-with-datasette,"At my job I work with some really talented Web Developers that are saddled
with a pretty creaky legacy system.
We're getting ready to start on a new(ish) project where we'll be taking an
old project built on this creaky legacy system (`VB.net`) and re-implementing
it on a …
",Prototyping with Datasette,https://www.ryancheley.com/2021/08/09/prototyping-with-datasette/
ryan,technology,"There are a lot of different ways to get the content for your Pelican site
onto the internet. The [Docs
show](https://docs.getpelican.com/en/latest/publish.html) an example using
`rsync`.
For automation they talk about the use of either `Invoke` or `Make` (although
you could also use [`Just`](https://github.com/casey/just) instead of `Make`
which is my preferred command runner.)
I didn't go with any of these options, instead opting to use GitHub Actions
instead.
I have [two GitHub
Actions](https://github.com/ryancheley/ryancheley.com/tree/main/.github/workflows)
that will publish updated content. One action publishes to a UAT version of
the site, and the other to the Production version of the site.
Why two actions you might ask?
Right now it's so that I can work through making my own theme and deploying it
without disrupting the content on my production site. Also, it's a workflow
that I'm pretty used to:
1. Local Development
2. Push to Development Branch on GitHub
3. Pull Request into Main on GitHub
It kind of complicates things right now, but I feel waaay more comfortable
with having a UAT version of my site that I can just undo if I need to.
Below is the code for the [Prod
Deployment](https://raw.githubusercontent.com/ryancheley/ryancheley.com/main/.github/workflows/publish.yml)
name: Pelican Publish
on:
push:
branches:
- main
jobs:
deploy:
runs-on: ubuntu-18.04
steps:
- name: deploy code
uses: appleboy/ssh-action@v0.1.2
with:
host: ${{ secrets.SSH_HOST }}
key: ${{ secrets.SSH_KEY }}
username: ${{ secrets.SSH_USERNAME }}
script: |
rm -rf ryancheley.com
git clone git@github.com:ryancheley/ryancheley.com.git
source /home/ryancheley/venv/bin/activate
cp -r ryancheley.com/* /home/ryancheley/
cd /home/ryancheley
pip install -r requirements.txt
pelican content -s publishconf.py
Let's break it down a bit
Lines 3 - 6 just indicate when to actually perform the actions defined in the
lines below.
In line 13 I invoke the `appleboy/ssh-action@v0.1.2` which allows me to ssh
into my server and then run some command line functions.
On line 20 I remove the folder that the code was previously cloned into, and
in line 21 I run the `git clone` command to download the code
Line 23 I activate my virtual environment
Line 25 I copy the code from the cloned repo into the directory of my site
Line 27 I change directory into the source for the site
Line 29 I make any updates to requirements with `pip install`
Finally, in line 31 I run the command to publish the content (which takes my
`.md` files and turns them into HTML files to be seen on the internet)
",2021-07-07,publishing-content-to-pelican-site,"There are a lot of different ways to get the content for your Pelican site
onto the internet. The [Docs
show](https://docs.getpelican.com/en/latest/publish.html) an example using
`rsync`.
For automation they talk about the use of either `Invoke` or `Make` (although
you could also use [`Just`](https://github.com/casey/just) instead of `Make`
which is my preferred …
",Publishing content to Pelican site,https://www.ryancheley.com/2021/07/07/publishing-content-to-pelican-site/
ryan,technology,"With the most recent release of the iOS app [Workflow](https://workflow.is) I
was toying with the idea of writing a workflow that would allow me to update /
add a file to a [GitHub repo](https://github.com) via a workflow.
My thinking was that since [Pythonista](http://omz-software.com/pythonista/)
is only running local files on my iPad if I could use a workflow to access the
api elements to push the changes to my repo that would be pretty sweet.
In order to get this to work I'd need to be able to accomplish the following
things (not necessarily in this order)
* Have the workflow get a list of all of the repositories in my GitHub
* Get the current contents of the app to the clip board
* Commit the changes to the master of the repo
I have been able to write a
[Workflow](https://workflow.is/workflows/8e986867ff074dbe89c7b0bf9dcb72f5)
that will get all of the public repos of a specified github user. Pretty
straight forward stuff.
The next thing I'm working on getting is to be able to commit the changes from
the clip board to a specific file in the repo (if one is specified) otherwise
a new file would be created.
I really just want to 'have the answer' for this, but I know that the journey
will be the best part of getting this project completed.
So for now, I continue to read the [GitHub API
Documentation](https://developer.github.com/v3/) to discover exactly how to do
what I want to do.
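From what I've read so far, the piece I'll need is the Contents API, which lets you create or update a single file with a PUT request. Here's a rough sketch of what that call might look like from Python (the token, repo, and file path below are placeholders, and this isn't the Workflow implementation itself):
import base64
import requests

token = 'personal-access-token-here'   # placeholder
repo = 'your-username/your-repo'       # placeholder
path = 'some_script.py'                # placeholder

url = 'https://api.github.com/repos/{}/contents/{}'.format(repo, path)
headers = {'Authorization': 'token {}'.format(token)}

# if the file already exists, its current sha is required to update it
existing = requests.get(url, headers=headers)
sha = existing.json().get('sha') if existing.status_code == 200 else None

payload = {
    'message': 'Update from my iPad',
    'content': base64.b64encode(b'some file contents').decode(),
}
if sha is not None:
    payload['sha'] = sha

response = requests.put(url, json=payload, headers=headers)
print(response.status_code)   # 201 for a new file, 200 for an update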
",2016-10-29,pushing-changes-from-pythonista-to-github-step1,"With the most recent release of the iOS app [Workflow](https://workflow.is) I
was toying with the idea of writing a workflow that would allow me to update /
add a file to a [GitHub repo](https://github.com) via a workflow.
My thinking was that since [Pythonista](http://omz-software.com/pythonista/)
is only running local files on my iPad …
",Pushing Changes from Pythonista to GitHub - Step 1,https://www.ryancheley.com/2016/10/29/pushing-changes-from-pythonista-to-github-step1/
ryan,technology,"Every month I set up a budget for my family so that we can track our spending
and save money in the ways that we need to while still being able to enjoy
life.
I have a couple of Siri Shortcuts that will take a picture and then put that
picture into a folder in Dropbox. The reason that I have a couple of them is
that one is for physical receipts that we got at a store and the other is for
online purchases. I’m sure that these couple be combined into one, but I
haven’t done that yet.
One of the great things about these shortcuts is that they will create the
folder that the image will go into if it’s not there. For example, the first
receipt of March 2019 will create a folder called **March** in the **2019**
folder. If the **2019** folder wasn’t there, it would have created it too.
What it doesn’t do is create the sub folder that all of my processed receipts
will go into. Each month I need to create a folder called `month_name`
Processed. And each month I think, there must be a way I can automate this,
but because it doesn’t really take that long I’ve never really done it.
Over the weekend I finally had the time to try and write it up and test it
out. Nothing too fancy, but it does what I want it to do, and a little more.
# create the variables I'm going to need later
y=$( date +""%Y"" )
m=$( date +""%B"" )
p=$( date +""%B_Processed"" )
# check to see if the Year folder exists and if it doesn't, create it
if [ ! -d /Users/ryan/Dropbox/Family/Financials/$y ]; then
mkdir /Users/ryan/Dropbox/Family/Financials/$y
fi
# check to see if the Month folder exists and if it doesn't, create it
if [ ! -d /Users/ryan/Dropbox/Family/Financials/$y/$m ]; then
mkdir /Users/ryan/Dropbox/Family/Financials/$y/$m
fi
#check to see if the Month_Processed folder exists and if it doesn't, create it
if [ ! -d ""/Users/ryan/Dropbox/Family/Financials/$y/$m/$p"" ]; then
mkdir ""/Users/ryan/Dropbox/Family/Financials/$y/$m/$p""
fi
In the last section I use the double quotes “” around the directory name so that
I can have a space in the name of the processed folder. Initially I had used
an underscore but that’s not how I do it in real life when creating the
subdirectories, so I had to do a bit of googling and found a helpful
[resource](https://ubuntuforums.org/showthread.php?t=1962625).
The only thing left to do at this point is get it set up to run automatically
so I don’t have to do anything.
In order to do that I needed to add the following to my cronjob:
0 5 1 * * /Users/ryan/Documents/scripts/create_monthly_expense_folders.sh
And now I will have my folder structure created for me automatically on the
first of the month at 5am!
",2019-03-16,receipts,"Every month I set up a budget for my family so that we can track our spending
and save money in the ways that we need to while still being able to enjoy
life.
I have a couple of Siri Shortcuts that will take a picture and then put that …
",Receipts,https://www.ryancheley.com/2019/03/16/receipts/
ryan,technology,"When I scheduled my last post on December 14th to be published at 6pm that
night I noticed that the schedule time was a bit … off:

I realized that the server time was still set to GMT and that I had missed the
step in the Linode Getting Started guide to Set the Timezone.
No problem, just found the Guide, went to
[this](https://linode.com/docs/getting-started/#set-the-timezone ""Set the
Timezone"") section and ran the following command:
`sudo dpkg-reconfigure tzdata`
I then selected my country (US) and my time zone (Pacific-Ocean) and now the
server has the right timezone.
",2017-12-15,setting-the-timezone-on-my-server,"When I scheduled my last post on December 14th to be published at 6pm that
night I noticed that the schedule time was a bit … off:

I realized that the server times as still set to GMT and that I had missed the
step in the Linode Getting Started guide …
",Setting the Timezone on my server,https://www.ryancheley.com/2017/12/15/setting-the-timezone-on-my-server/
ryan,technology,"In a [previous post](/itfdb.html) I wrote about my Raspberry Pi experiment to
have the SenseHat display a scrolling message 10 minutes before game time.
One of the things I have wanted to do since then is have Vin Scully’s voice
come from a speaker and say those five magical words, `It's time for Dodger
Baseball!`
I found a clip of [Vin on
Youtube](https://www.youtube.com/watch?v=4KwFuGtGU6c) saying that (and a
little more). I wasn’t sure how to get the audio from that YouTube clip
though.
After a bit of googling1 I found a command line tool called [youtube-
dl](https://rg3.github.io/youtube-dl/). The tool allowed me to download the
video as an `mp4` with one simple command:
youtube-dl https://www.youtube.com/watch?v=4KwFuGtGU6c
Once the mp4 was downloaded I needed to extract the audio from the `mp4` file.
Fortunately, `ffmpeg` is a tool for just this type of exercise!
I modified [this answer from
StackOverflow](https://stackoverflow.com/questions/9913032/ffmpeg-to-extract-
audio-from-video) to meet my needs
ffmpeg -i dodger_baseball.mp4 -ss 00:00:10 -t 00:00:9.0 -q:a 0 -vn -acodec copy dodger_baseball.aac
This got me an `aac` file, but I was going to need an `mp3` to use in my
Python script.
Next, I used a [modified version of this
suggestion](https://askubuntu.com/questions/35457/converting-aac-to-mp3-via-
command-line) to write my own command
ffmpeg -i dodger_baseball.aac -c:a libmp3lame -ac 2 -b:a 190k dodger_baseball.mp3
I could have probably combined these two steps, but … meh.
OK. Now I have the famous Vin Scully saying the best five words on the planet.
All that’s left to do is update the python script to play it. Using guidance
from [here](https://raspberrypi.stackexchange.com/questions/7088/playing-
audio-files-with-python) I updated my `itfdb.py` file from this:
if month_diff == 0 and day_diff == 0 and hour_diff == 0 and 0 >= minute_diff >= -10:
    message = '#ITFDB!!! The Dodgers will be playing {} at {}'.format(game.game_opponent, game.game_time)
    sense.show_message(message, scroll_speed=0.05)
To this:
if month_diff == 0 and day_diff == 0 and hour_diff == 0 and 0 >= minute_diff >= -10:
    message = '#ITFDB!!! The Dodgers will be playing {} at {}'.format(game.game_opponent, game.game_time)
    sense.show_message(message, scroll_speed=0.05)
    os.system(""omxplayer -b /home/pi/Documents/python_projects/itfdb/dodger_baseball.mp3"")
However, what that does is play Vin’s silky smooth voice every minute for 10
minutes before game time. Music to my ears but my daughter was not a fan, and
even my wife who LOVES Vin asked me to change it.
One final tweak, and now it only plays at 5 minutes before game time and 1
minute before game time:
if month_diff == 0 and day_diff == 0 and hour_diff == 0 and 0 >= minute_diff >= -10:
    message = '#ITFDB!!! The Dodgers will be playing {} at {}'.format(game.game_opponent, game.game_time)
    sense.show_message(message, scroll_speed=0.05)
if month_diff == 0 and day_diff == 0 and hour_diff == 0 and (minute_diff == -1 or minute_diff == -5):
    os.system(""omxplayer -b /home/pi/Documents/python_projects/itfdb/dodger_baseball.mp3"")
Now, for the rest of the season, even though Vin isn’t calling the games, I’ll
get to hear his voice letting me know, “It’s Time for Dodger Baseball!!!”
1. Actually, it was an embarrassing amount ↩︎
",2018-03-15,setting-up-itfdb-with-a-voice,"In a [previous post](/itfdb.html) I wrote about my Raspberry Pi experiment to
have the SenseHat display a scrolling message 10 minutes before game time.
One of the things I have wanted to do since then is have Vin Scully’s voice
come from a speaker and say those five magical …
",Setting up ITFDB with a voice,https://www.ryancheley.com/2018/03/15/setting-up-itfdb-with-a-voice/
ryan,technology,"A [Jupyter Notebook](http://jupyter.org) is an open-source web application
that allows you to create and share documents that contain live code,
equations, visualizations and narrative text.
Uses include:
1. data cleaning and transformation
2. numerical simulation
3. statistical modeling
4. data visualization
5. machine learning
6. and other stuff
I’ve been interested in how to set up a Jupyter Notebook on my
[Linode](https://www.linode.com) server for a while, but kept running into a
roadblock (either mental or technical I’m not really sure).
Then I came across this ‘sweet’ solution to get them set up
at
My main issue was what I needed to do to keep the Jupyter Notebook running
once I disconnected from the command line. The solution above gave me what I
needed to solve that problem
nohup jupyter notebook
`nohup` allows you to disconnect from the terminal but keeps the command
running in the background (which is exactly what I wanted).
The next thing I wanted to do was to have the `jupyter` notebook server run
from a directory that wasn’t my home directory.
To do this was way easier than I thought. You just run `nohup jupyter
notebook` from the directory you want to run it from.
The last thing to do was to make sure that the notebook would start up with a
server reboot. For that I wrote a shell script
# change to correct directory
cd /home/ryan/jupyter
nohup jupyter notebook &> /home/ryan/output.log
The last command is a slight modification of the line from above. I really
wanted the output to get directed to a file that wasn’t in the directory that
the `Jupyter` notebook would be running from. Not any reason (that I know of
anyway) … I just didn’t like the `nohup.out` file in the working directory.
Anyway, I now have a running Jupyter Notebook at
1
1. I’d like to update this to be running from a port other than 8888 AND I’d like to have it on SSL, but one thing at a time! ↩︎
",2018-05-27,setting-up-jupyter-notebook-on-my-linode,"A [Jupyter Notebook](http://jupyter.org) is an open-source web application
that allows you to create and share documents that contain live code,
equations, visualizations and narrative text.
Uses include:
1. data cleaning and transformation
2. numerical simulation
3. statistical modeling
4. data visualization
5. machine learning
6. and other stuff
I’ve been interested in how to set …
",Setting up Jupyter Notebook on my Linode,https://www.ryancheley.com/2018/05/27/setting-up-jupyter-notebook-on-my-linode/
ryan,technology,"If you want to have more than 1 Django site on a single server, you can. It’s
not too hard, and using the Digital Ocean tutorial as a starting point, you
can get there.
Using [this tutorial](https://www.digitalocean.com/community/tutorials/how-to-
set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04) as a start, we
set up so that there are multiple Django sites being served by `gunicorn` and
`nginx`.
## Creating `systemd` Socket and Service Files for Gunicorn
The first thing to do is to set up 2 Django sites on your server. You’ll want
to follow the tutorial referenced above and just repeat for each.
Start by creating and opening two systemd socket file for Gunicorn with sudo
privileges:
Site 1
sudo vim /etc/systemd/system/site1.socket
Site 2
sudo vim /etc/systemd/system/site2.socket
The contents of the files will look like this:
[Unit]
Description=siteX socket
[Socket]
ListenStream=/run/siteX.sock
[Install]
WantedBy=sockets.target
Where `siteX` is the site you want to serve from that socket
Next, create and open a systemd service file for Gunicorn with sudo privileges
in your text editor. The service filename should match the socket filename
with the exception of the extension
sudo vim /etc/systemd/system/siteX.service
The contents of the file will look like this:
[Unit]
Description=gunicorn daemon
Requires=siteX.socket
After=network.target
[Service]
User=sammy
Group=www-data
WorkingDirectory=path/to/directory
ExecStart=path/to/gunicorn/directory \
          --access-logfile - \
          --workers 3 \
          --bind unix:/run/siteX.sock \
          myproject.wsgi:application
[Install]
WantedBy=multi-user.target
Again `siteX` is the socket you want to serve
Follow tutorial for testing Gunicorn
## Nginx
server {
listen 80;
server_name server_domain_or_IP;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /path/to/project;
}
location / {
include proxy_params;
proxy_pass http://unix:/run/siteX.sock;
}
}
Again `siteX` is the socket you want to serve
Next, link to enabled sites
Test Nginx
Open firewall
Should now be able to see sites at domain names
",2021-03-07,setting-up-multiple-django-sites-on-a-digital-ocean-server,"If you want to have more than 1 Django site on a single server, you can. It’s
not too hard, and using the Digital Ocean tutorial as a starting point, you
can get there.
Using [this tutorial](https://www.digitalocean.com/community/tutorials/how-to-
set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04) as a start, we
set up so that there are multiple Django …
",Setting up multiple Django Sites on a Digital Ocean server,https://www.ryancheley.com/2021/03/07/setting-up-multiple-django-sites-on-a-digital-ocean-server/
ryan,technology,"## The initial setup
Digital Ocean has a pretty nice API which makes it easy to automate the
creation of their servers (which they call `Droplets`. This is nice when
you’re trying to work towards automation of the entire process (like I was).
I won’t jump into the automation piece just yet, but once you have your DO
account setup (sign up [here](https://m.do.co/c/cc5fdad15654) if you don’t
have one), it’s a simple interface to [Setup Your
Droplet](https://www.digitalocean.com/docs/droplets/how-to/create/).
I chose the Ubuntu 18.04 LTS image with a \$5 server (1GB Ram, 1CPU, 25GB SSD
Space, 1000GB Transfer) hosted in their San Francisco data center (SFO21).
## We’ve got a server … now what?
We’re going to want to update, upgrade, and install all of the (non-Python)
packages for the server. For my case, that meant running the following:
apt-get update
apt-get upgrade
apt-get install python3 python3-pip python3-venv tree postgresql postgresql-contrib nginx
That’s it! We’ve now got a server that is ready to be setup for our Django
Project.
In the next post, I’ll walk through how to get your Domain Name to point to
the Digital Ocean Server.
1. SFO2 is disabled for new customers and you will now need to use SFO3 unless you already have resources on SFO2, but if you’re following along you probably don’t. What’s the difference between the two? Nothing 😁 ↩︎
",2021-01-31,setting-up-the-server-on-digital-ocean,"## The initial setup
Digital Ocean has a pretty nice API which makes it easy to automate the
creation of their servers (which they call `Droplets`. This is nice when
you’re trying to work towards automation of the entire process (like I was).
I won’t jump into the automation …
",Setting up the Server (on Digital Ocean),https://www.ryancheley.com/2021/01/31/setting-up-the-server-on-digital-ocean/
ryan,technology,"# Creating the user on the server
Each site on my server has it's own user. This is a security consideration,
more than anything else. For this site, I used the steps from [some of my
scripts for setting up a Django
site](https://www.ryancheley.com/2021/02/21/automating-the-deployment/). In
particular, I ran the following code from the shell on the server:
adduser --disabled-password --gecos """" ryancheley
adduser ryancheley www-data
The first command above creates the user with no password so that they can't
actually log in. It also creates the home directory `/home/ryancheley`. This
is where the site will be served from.
The second command adds the user to the `www-data` group. I don't think
that's strictly necessary here, but I ran it anyway to keep this user
consistent with the other web site users.
# Creating the nginx config file
For the most part I cribbed the `nginx` config files from this [blog
post](https://michael.lustfield.net/nginx/blog-with-pelican-and-nginx).
There were some changes that were required though. As I indicated in part 1, I
had several requirements I was trying to fulfill, most notably not breaking
historic links.
Here is the config file for my UAT site (the only difference between this and
the prod site is the `server_name` directive):
server {
server_name uat.ryancheley.com;
root /home/ryancheley/output;
location / {
# Serve a .gz version if it exists
gzip_static on;
error_page 404 /404.html;
rewrite ^/index.php/(.*) /$1 permanent;
}
location = /favicon.ico {
# This never changes, so don't let it expire
expires max;
}
location ^~ /theme {
# This content should very rarely, if ever, change
expires 1y;
}
listen [::]:443 ssl ipv6only=on; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/uat.ryancheley.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/uat.ryancheley.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = uat.ryancheley.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen [::]:80;
listen 80;
server_name uat.ryancheley.com;
return 404; # managed by Certbot
}
The most interesting part of the config above is this `location` block:
location / {
# Serve a .gz version if it exists
gzip_static on;
error_page 404 /404.html;
rewrite ^/index.php/(.*) /$1 permanent;
}
## Custom 404 Page
error_page 404 /404.html;
This line is what allows me to have a custom
[404](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404) error
page. If a page is not found `nginx` will serve up the html page `404.html`
which is generated by a markdown file in my pages directory and looks like
this:
Title: Not Found
Status: hidden
Save_as: 404.html
The requested item could not be located.
I got this implementation idea from the [Pelican
docs](https://docs.getpelican.com/en/4.6.0/tips.html?highlight=404#custom-404-pages).
## Rewrite rule for index.php in the URL
rewrite ^/index.php/(.*) /$1 permanent;
The rewrite line fixes the `index.php` challenge I mentioned in the [previous
post](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-
wordpress/)
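As a concrete example of what that rule does (using the post linked above), a legacy WordPress-style URL like
https://www.ryancheley.com/index.php/2021/07/02/migrating-to-pelican-from-wordpress/
should get a permanent (301) redirect to
https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-wordpress/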
It took me a _really_ long time to figure this out because the initial config
file had a `location` block that looked like this:
location = / {
# Instead of handling the index, just
# rewrite / to /index.html
rewrite ^ /index.html;
}
I didn't recognize that `location = / {` is different from the plain
`location / {` block shown above. So I added
rewrite ^/index.php/(.*) /$1 permanent;
to that block and it NEVER worked because it never could.
The `=` in a location block indicates a literal, exact match, so the dynamic
regular expression in the rewrite could never apply there 🤦🏻♂️
OK, we've got a user, and we've got a configuration file, now all we need is a
way to get the files to the server.
I'll go over that in the next post.
",2021-07-05,setting-up-the-server-to-host-pelican,"# Creating the user on the server
Each site on my server has it's own user. This is a security consideration,
more than anything else. For this site, I used the steps from [some of my
scripts for setting up a Django
site](https://www.ryancheley.com/2021/02/21/automating-the-deployment/). In
particular, I ran the following code from …
",Setting up the Server to host my Pelican Site,https://www.ryancheley.com/2021/07/05/setting-up-the-server-to-host-pelican/
ryan,technology,"I’ve written about my migration from Squarespace to Wordpress earlier this
year. One thing I lost with that migration when I went to Wordpress in AWS was
having SSL available. While I’m sure Van Hoet will “well actually” me on this,
I never could figure out how to set it up ( not that I tried particularly hard
).
The thing is now that I’m hosting on Linode I’m finding some really useful
tutorials. This one showed me exactly what I needed to do to get it set up.
Like any good planner I read the how-to several times, convinced myself that
it was actually relatively straightforward to do, and so I started.
## Step 1 Creating the cert files
Using [this tutorial](https://www.linode.com/docs/security/ssl/create-a-self-
signed-certificate-on-debian-and-ubuntu ""Creating Self Signed Certificates on
Ubuntu"")I was able to create the required certificates to set up SSL. Of
course, I ran into an issue when trying to run this command
`chmod 400 /etc/ssl/private/example.com.key`
I did not have permission to chmod that file. After a bit of Googling I
found that I can switch to interactive root mode by running the command
`sudo -i`
It feels a bit dangerous to be able to just do that (I didn’t have to enter a
password) but it worked.
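For reference, the self-signed certificate step from that tutorial boils down to something along these lines (treat this as a sketch; the exact flags and paths come from the tutorial):
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/private/example.com.key \
    -out /etc/ssl/certs/example.com.crt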
## Step 2
OK, so the tutorial above got me most(ish) of the way there, but I needed to
sign my own certificate. For that I used this
[tutorial](https://www.linode.com/docs/security/ssl/install-lets-encrypt-to-
create-ssl-certificates ""SSL""). I followed the directions but kept coming up
with an error:
`Problem binding to port 443: Could not bind to the IPv4 or IPv6`
I rebooted my Linode server. I restarted apache. I googled and I couldn’t find
the answer I was looking for.
I wanted to give up, but tried Googling one more time. Finally! An answer so
simple it couldn’t work. But then it did.
Stop Apache, run the command to start Apache back up and boom. The error went
away and I had a certificate.
However, when I tested the site using [SSL
Labs](https://www.ssllabs.com/ssltest/analyze.html ""Analyze my SSL"") I was
still getting an error / warning for an untrusted site.
🤦🏻♂️
—
OK ... take 2
I nuked my linode host to start over again.
First things first ... we needed to [secure my
server](https://linode.com/docs/security/securing-your-server/ ""Securing Your
Server""). Next, we need to set up the server as a LAMP and Linode has [this
tutorial](https://linode.com/docs/web-servers/lamp/install-lamp-stack-on-
ubuntu-16-04/ ""LAMP on Linode"") to walk me through the steps of setting it up.
I ran into an issue when I restarted the Apache service and realized that I
had set my host name but hadn’t updated the hosts file. No problem though.
Just fire up `vim` and add the line:
`127.0.0.1 milo`
Next, I used [this tutorial](https://www.linode.com/docs/security/ssl/create-
a-self-signed-certificate-on-debian-and-ubuntu/ ""Self Signed Certificate on
Ubuntu"") to create a self signed certificate and [this to get the SSL to be
set up](https://www.linode.com/docs/security/ssl/ssl-apache2-debian-ubuntu/
""SSL Apache2 Ubuntu"").
One thing that I expected was that it would just work. After doing some more
reading I realized that a self signed certificate is really only useful for
internal applications. Once I realized this I decided not to redirect my site
to SSL (i.e. port 443) but instead to just use the ssl certificate to post
from Ulysses securely.
Why go to all this trouble just to use a third party application to post to a
WordPress site? Because [Ulysses](https://www.ulyssesapp.com ""Ulysses"") is an
awesome writing app and I love it. If you’re writing and not using it, I’d
give it a try. It really is a nice app.
So really, no _good_ reason. Just that. And, I like to figure stuff out.
OK, so Ulysses is great. But why the need for an SSL certificate? Mostly
because when I tried to post to Wordpress from Ulysses without any
certificates ( self signed or not ) I would get a warning that my traffic was
unencrypted and could be snooped. I figured, better safe than sorry.
Now with the ssl cert all I had to do was trust my self signed certificate and
I was set1
1. Mostly. I still needed to specify the domain with www otherwise it didn’t work. ↩︎
",2017-12-15,setting-up-the-site-with-ssl,"I’ve written about my migration from Squarespace to Wordpress earlier this
year. One thing I lost with that migration when I went to Wordpress in AWS was
having SSL available. While I’m sure Van Hoet will “well actually” me on this,
I never could figure out how to …
",Setting up the site with SSL,https://www.ryancheley.com/2017/12/15/setting-up-the-site-with-ssl/
ryan,technology,"Last October I gave my first honest to goodness, on my own, up on the stage by
myself talk at a tech conference. It was the most stressful yet fulfilling
professional experience I've had.
Fulfilling in that I've wanted to get better at speaking in public and this
helped in that goal.
Stressful in that I really wanted to do a good job and wasn't sure that I
could, or worse, that anyone would care about what I had to say.
Well, neither of those things turned out to be true. I did get a lot of good
feedback which tells me I did a good job, and people were very encouraging for
the words that I had to say, so people did care.
My presentation went so well that I was even [interviewed by Jay
Miller](https://youtu.be/WkeRI7LkBeY?si=gIgeMODD3aQJsfvX).
You can see my actual talk
[here](https://youtu.be/VPldDxuJDsg?si=r2ob3j4zIeYZY7tO), but I thought it
would also be interesting for you to see how I got here.
## Submitting the idea
I submitted my talk idea for DCUS 2023 in May and it was selected in June.
That gave me roughly 3 months to get my loose outline of an idea into a 45
minute talk.
## Brainstorming how the talk would go
I have tried to get a better workflow for brainstorming ideas in general, but
I really wanted to up my game for this talk. To that end I used the [Story
Teller Tactics](https://pipdecks.com/pages/storyteller-tactics-card-deck)
cards to help determine the path of the story I would tell in my presentation.
That helped when I got to mind mapping1 my talk.

The use of the Story Teller Tactics, combined with my mind map, led to a
starting point for creating my presentation.
## My 'Oh Sh%t moment'
Back in early July I was browsing Mastodon (instead of working on my
presentation) and came across a link to an article with the title [How To
Become A Better Speaker At
Conferences](https://www.smashingmagazine.com/2023/07/become-better-speaker-
conferences/). I saved it to my read it later service and went on browsing. A
few weeks later I actually read the article (around July 25).
This bit of advice got me a little worried:
> On a practical level, a 45-minute talk can take a **surprisingly long time
> to put together**. I reckon it takes me at least an hour of preparation for
> every minute of content.
Yikes! That meant I would need to prepare and rehearse for 45 hours over the
course of 14 weeks (almost 4 hours per week on average). So I set up a
schedule for how I would meet this requirement.
## Working on the presentation
I spent all of August and the early part of September working on my
presentation for about 3 hours a week, adding slight tweaks to it here and
there. I conducted a dry run with my team at work to assess my presentation's
progress.
The dry run went _fine_ , but it was clear that my presentation was missing
_something_.
### Asking for feedback
Django Con offers up mentors to help you work on your talk. If you're newer to
giving presentations, I **highly** recommend engaging with one of them. The
feedback they provide is priceless!
I had the great fortune to reach out to [Katie
McLaughlin](https://cloudisland.nz/@glasnt) who had just given a talk at PyCon
Australia titled [Present Like a
Pro](https://www.youtube.com/watch?v=YMcx35RGzYM). I watched that before
reaching out to her, and she gave some very good advice on presenting.
### Getting serious about preparing for the talk
As I said, I had done a dry run with my team at work and it was _fine_ but I
could tell that the way I was working on my presentation wasn't getting it to
where I wanted it to be. So I decided to go all in on practicing and trying to
make it the best I could. To accomplish that, I believed I would need to
engage in **deliberate** practice by actually giving my presentation. This
was a breakthrough moment for me in improving the presentation and my
delivery of it.
In order to have deliberate practice I set up the following routine:
1. Give my presentation and RECORD it
2. Watch my presentation and make notes about what needed to be improved
3. Update my presentation based on the notes from #2
4. Go back to step 1
Steps 1 - 3 were done on different days. For example, on Monday I would record
me giving the presentation; on Tuesday I would watch the presentation and make
notes; on Wednesday I would update the presentation based on my notes from
Tuesday; on Thursday I would start over.
I did this a total of **7** times over 21 days. Two of those times I gave the
presentation to a 'live' audience and was able to get feedback from them on
various parts of the presentation.
I did one final dry run on October 13th (the Friday before my presentation was
to happen for _real_ )
That Friday was the _last_ time I even looked at my presentation before giving
it. I know some people will talk about making updates on the plane ride to the
conference, or the night before, or the hour before, but that would stress the
crap out of me, and I was already stressed out enough!
## Giving my talk for Real
On Monday October 16th I gave my talk in front of people, in real life, for
the first time. Here I am up on stage with the crowd in the background

All in all it was a really fulfilling experience, but it was pretty hard too.
This was the _first_ time I spoke at a Tech Conference and I really wanted to
do well.
As I said before, I received some really good feedback on the talk and I was
really glad to have done it.
Now, you might ask, ""Would I have to go to all of this trouble to prepare for
a talk?""
Maybe, maybe not. I just happened to find this particular prep process worked
well for my brain.
It was nice to hear that some of the attendees were surprised this was the
first honest to goodness talk I had ever given on my own, because it sounded
so polished and well done.
Practice makes better, and in this case (based on the videos) it sure did for
me
## A Big Thank you
A presentation like this took a lot out of me, but I am extremely grateful to
a few people in particular:
* [Katie McLaughlin](https://cloudisland.nz/@glasnt)
* The Team of Web Developers at work
* Bookie
* Chris
* Jason
* Jon
* My daughter Abigail
## A little bit extra
If you want to see more details on my talk, here is a [playlist of the dry run
attempts](https://www.youtube.com/playlist?list=PLMHsf-A9W6iXadPsvD-7Efqo860LpFpxP)
I did to prepare2
If you want to see the repo where the changes were tracked for my presentation
it can be found [here](https://github.com/ryancheley/djangocon-us-2023)
If you want to see my annotated slides, you can find them
[here](https://annotated-notes.ryancheley.com/dcus2023/annotated-slides.html)
## Time tracking
I time track the crap out of my work day, and I _really_ wish I had done that
here just to get a more exact idea of how much time I spent preparing. For
whatever reason I didn't, so some basic back-of-the-envelope math is the best
I can do: it gives me nearly 56 hours of prep for this. The times below are
mostly estimates based on my memory (which could be wildly overstated, or
understated).
Activity | Time Spent
---|---
Story Teller Tactics Work | 1.5 Hours
Mind Mapping Talk | 3 hours
Initial Draft of Presentation | 5 Hours
Presentation Updates | 18 Hours
Deliberate Practice | 28 Hours
**Total Time** | **55.5 hours**
* * *
1. I use MindNode for mind mapping, mostly on my iPad ↩︎
2. Be careful, there is at least one not-safe-for-work word in one of the early videos. ↩︎
",2023-12-15,so-you-want-to-give-a-talk-at-a-conference,"Last October I gave my first honest to goodness, on my own, up on the stage by
myself talk at a tech conference. It was the most stressful yet fulfilling
professional experience I've had.
Fulfilling in that I've wanted to get better at speaking in public and this
helped in …
",So you want to give a talk at a conference?,https://www.ryancheley.com/2023/12/15/so-you-want-to-give-a-talk-at-a-conference/
ryan,technology,"If you want to access a server in a 'passwordless' way, the best approach I
know is to use SSH Keys. This is great, but what does that mean and how do you
set it up?
I'm going to attempt to write out the steps for getting this done.
Let's assume we have two servers, `web1` and `web2`. These two servers have 1
non-root user which I'll call `user1`.
So we have something like this
* `user1@web1`
* `user1@web2`
Suppose we want to allow user1 from web2 to access web1.
At a high level, to allow SSH access to web1 for user1 on web2 we need
to:
1. Create `user1` on `web1`
2. Create `user1` on `web2`
3. Create SSH keys on `web2` for `user1`
4. Add the public key for `user1` from `web2` to the `authorized_keys` file for `user1` on `web1`
OK, let's try this. I am using DigitalOcean and will be taking advantage of
their CLI tool `doctl`
To create a droplet, there are two required arguments:
* image
* size
I'm also going to include a few other options
* tag
* region
* ssh-keys1
Below is the command to use to create a server called `web-t-001`
doctl compute droplet create web-t-001 \
--image ubuntu-24-04-x64 \
--size s-1vcpu-1gb \
--enable-monitoring \
--region sfo2 \
--tag-name test \
--ssh-keys $(doctl compute ssh-key list --output json | jq -r 'map(.id) | join("","")')
and to create a server called `web-t-002`
doctl compute droplet create web-t-002 \
--image ubuntu-24-04-x64 \
--size s-1vcpu-1gb \
--enable-monitoring \
--region sfo2 \
--tag-name test \
--ssh-keys $(doctl compute ssh-key list --output json | jq -r 'map(.id) | join("","")')
The `--ssh-keys` value above grabs all of the ssh keys I have stored at
DigitalOcean and adds them to the new droplet.
The output looks something like:
> \--ssh-keys 1234, 4567, 6789, 1222
Now that we've created two droplets called `web-t-001` and `web-t-002` we can
set up user1 on each of the servers.
I'll SSH as root into each of the servers and create `user1` on each (I can do
this because of the ssh keys that were added as part of the droplet creation)
* `adduser --disabled-password --gecos ""User 1"" user1 --home /home/user1`
I then switch to `user1`
`su user1`
and run this command
`ssh-keygen -q -t rsa -b 2048 -f /home/user1/.ssh/id_rsa -N ''`
This will generate two files in `~/.ssh` without any prompts:
* `id_rsa`
* `id_rsa.pub`
The `id_rsa` file identifies the computer. This is the Private Key. It should
NOT be shared with anyone!
The `id_rsa.pub` file is the Public Key. It CAN be shared.
The contents of the `id_rsa.pub` file will be used for the `authorized_keys`
file on the computer this user will SSH into.
OK, what does this actually look like?
On `web-t-002`, I get the content of `~/.ssh/id_rsa.pub` for `user1` and copy
it to my clipboard.
Then from `web-t-001` as `user1` I paste that into the `authorized_keys` file
in `~/.ssh`. If the file isn't already there, it needs to be created.
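A sketch of doing that entirely from the shell on `web-t-001` as `user1` (the key below is a placeholder for the real public key copied from `web-t-002`):
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo 'ssh-rsa AAAA...placeholder... user1@web-t-002' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys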
This tells `web-t-001` that a computer holding the private key that matches
this public key is allowed to connect as `user1`.
To put this into practice, be on `web-t-002` as `user1` and then run the
command
ssh user1@web-t-001
The first time this is done, a prompt will come up that looks like this:
The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established.
ED25519 key fingerprint is SHA256:....
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
This ensures that you know which computer you're connecting to and that you
want to continue, which helps to prevent potential man-in-the-middle attacks.
When you type `yes` this creates a file called `known_hosts`.
OK, so where are we at? The table below shows the files, their content, and
their servers
Server | id_rsa | id_rsa.pub | authorized_keys | known_hosts
---|---|---|---|---
web-t-001 | private key | ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAACAQDFwHs8VKWWSH737fVz4cs+5Eq8OcRJRf2ti0ytaChM1ySh2+olcKokHao3fl5G+ZZv4pQeKfCh8ClFP86g7rZN1evu2EFVlmBo1Ked4IwF4UBY2+rnfZmvxeHd+smtyZgfVZI/6ySfe1D+inAqv7otsMsNRRuE4aG0DNEJ39qwFxukGNcDXk9RNVvmwbCc5zT/HN0yMJ6Y7KtfPZgjl5v854VodZkfxsLpah7Bn64zAQr/xDh2KcWbtDrsvTdjNMPY7oW20VoqDs98mA6xAw9RNMI+xotNmivdWdv3BEYj9JyH61euTBQ27HC4LsOPuCOFKBqOwGXiJhpzvJZbNCcvQEztem3kqQFAPLg+4wBInyxnY2i31QX7+2IJs0a4pYTWRSRcrvwBAvi2GlXGltrZ7V6KOLzwBrXLD7XiO3C5kO5fcpanKlm/RdVAxUTjUq159H+v9om8HAgX/pIpYBpPnRrG7setNQVzDNQsxfR/YC0h+f9LWnnaBV6+51IjbaqAPSSf6KYv0AKO5XNlJsSTXNRBZaRvrfr0qllgXU82f9y8Eb0sgjL71wD9Fv24fV0toFW8PH3yOeePC6d7kNqZkFdSBksChzqagZwPudYnVhMmhMYV7k1v831H8WHdGPVRe9Z3BDnSCzf8o8fRS3mSEAJBiT30bXlGWUNopIpsgw==
user1@ahc-web-t-001 | ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAACAQCbx+wTVEcdy2Uu2iB+u6+R8Q0yH9ws92GM6K/XXmAXoUuXylkdJzw9vUeuaZTmGxwGRdp+lLh+vVDmiuzrUPjbkFA7Y1SxfR5lgJu7PviDDZzsFeUo5fqSp6FOC5x75jOjqy6fc68GzOnoxk4WR6EWKWRd+xqdgCTGWiuhfUEl1lw7YN8MUhd1Hi0Ef55ZpH133jCzffWbkLFFInyIwuzG6jaPsobNPRshvg9kUoFwo5WqCx/s8Zk4iVl86yCwoV+pXjiubLylSKF7hb7uDE4Ll8gADOtuXUqmc470yvzSxxI4yaZOFz4Ajo1qZHgscSOxWgb+ZVIOKhGK5ftHPaZ4CCxXuhW5J8L3Aqs0WQeRu9Goof83V/ruZhzgg1vnhmC2511QSS2dL6U7n2JNLtNnXNjeSQ0BGVlY1FuZRczmAxN9nJETmRCdUfiTwKdPS4LdfAwrnckPHKtk1QoFKietLwfbmipU+pGvt6qKpKeRfZ/XGbG+ZiQ7oPiqcYU/eh54IAUxxo9CvVHtn742A4ABqK5+0MJP5VuY3fcDU8dIvA0r4LpxRpG/KSB4yZMUhjf+KR7QUpN3mJIDOKTDAxpGOqpNoD2gTYGpyT13AdrRROpOjOJZJqDiVi6m6r/U+sIgqymsxDqBur5+n4VxvvXbdNd+6vz7AI12WA8I8+0xZw==
user1@ahc-web-t-002 | NULL
web-t-002 | private key | ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAACAQCbx+wTVEcdy2Uu2iB+u6+R8Q0yH9ws92GM6K/XXmAXoUuXylkdJzw9vUeuaZTmGxwGRdp+lLh+vVDmiuzrUPjbkFA7Y1SxfR5lgJu7PviDDZzsFeUo5fqSp6FOC5x75jOjqy6fc68GzOnoxk4WR6EWKWRd+xqdgCTGWiuhfUEl1lw7YN8MUhd1Hi0Ef55ZpH133jCzffWbkLFFInyIwuzG6jaPsobNPRshvg9kUoFwo5WqCx/s8Zk4iVl86yCwoV+pXjiubLylSKF7hb7uDE4Ll8gADOtuXUqmc470yvzSxxI4yaZOFz4Ajo1qZHgscSOxWgb+ZVIOKhGK5ftHPaZ4CCxXuhW5J8L3Aqs0WQeRu9Goof83V/ruZhzgg1vnhmC2511QSS2dL6U7n2JNLtNnXNjeSQ0BGVlY1FuZRczmAxN9nJETmRCdUfiTwKdPS4LdfAwrnckPHKtk1QoFKietLwfbmipU+pGvt6qKpKeRfZ/XGbG+ZiQ7oPiqcYU/eh54IAUxxo9CvVHtn742A4ABqK5+0MJP5VuY3fcDU8dIvA0r4LpxRpG/KSB4yZMUhjf+KR7QUpN3mJIDOKTDAxpGOqpNoD2gTYGpyT13AdrRROpOjOJZJqDiVi6m6r/U+sIgqymsxDqBur5+n4VxvvXbdNd+6vz7AI12WA8I8+0xZw==
user1@ahc-web-t-002 | NULL |
|1|V6uYGlSiYXpzFAly9RQHybzl07o=|VUkDfRcKGyUgLdJn+iw6RJE+r68= ssh-ed25519
AAAAC3NzaC1lZDI1NTE5AAAAILpbPHA1jL0MHzBI8qb2X0mHDx3UlrKCdbz1IspvaJW9
|1|dshOpqJI2zQxEpj1pleDmtkijIY=|ZYV8bCeLNDdyE7STDPaO2TzYUEQ= ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQDnGgQUmsCG23b6iYxRHq5MU9xd8Q/p8j3EyZn9hvs4IsBoCgeNXjyXK28x7Mt7tmfjrF/4jLcq4o2TTAwF6eVQZ4KXoBa73dYqYDmYTVKTwzZL9CsJTWHTsSnU8V/J3Tml+hIFrjZzWP34+lL9xyOVin5R0PT/OCG49ecb5tt2FxTZeyWI47B/bCDGXV9g1tjZ8+mnbLXpIdQ9+6GllRZrEGvXWm6z/U3YHO84dcG0IZJ7QsEaAiLSBC/t83So4MDQgdttm+aHZXds4jej5E3QwUex8JkVVn0X7Nr4yKMDkSk7ABD6AFhpa4ESXysqI33CUaSBROAuu4lmfOkLmyRZK2vQ6soiOW8iBgCEl/q8MSOEpZeAi3faYbUnOpLzLDBcCoAuSDoexrTixxlhJmRDeS3PlcXmzvkJl7RRKUYZZcPQOd2w9ipCIAD1PevNlnmmZcfkRe0RRvAyF1mqcO/x5Ovtq9QLbycFHYfh/3LcPuDOWBtT+mVd5FeNUMsZ6+8=
|1|HJd+aDFM66x8jJT1zUZV59ceL10=|PfHQu/Yg35QPBKk7FvNO/46b76o= ecdsa-
sha2-nistp256
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP+XwUozGye03WJ6zC7yoJQaYF8HiUQKmZwnQO0wSxMm9x9nBdPEx1bmyZHHUbMnwQnoeAMmd6hgK6H8hbxzEas=
OK, so now we have set it up so that `user1` on `web-t-002` can access
`web-t-001` as `user1`. A reasonable question to ask at this point might be,
can I go the other way without any extra steps?
Let's try it and see!
From `web-t-001` as `user1` lets run
ssh user1@web-t-002
And see what happens
We get the same warning message we got before, and if we enter `yes` we then
see
user1@yyy.yyy.yyy.yyy: Permission denied (publickey).
What's going on?
We haven't added the public key for `user1` from `web-t-002` to the
`authorized_keys` file on `web-t-001`.
Let's do that now.
Similar to before we'll get the content of the id_rsa.pub from `web-t-001` for
`user1` and copy it to the `authorized_keys` file on `web-t-002` for `user1`.
When we do this we are now able to connect to `web-t-002` from `web-t-001` as
`user1`
OK, SSH has 4 main files involved:
* `id_rsa.pub` (public key): part of the SSH key pair used to authenticate the identity of a user or process that wants to access a remote system using the SSH protocol. The public key is placed on the remote server, which uses it to verify that the connecting client holds the matching private key.
* `id_rsa` (private key): The private key is secret, known only to the user, and should be encrypted and stored safely
* `known_hosts`: stores the public keys and fingerprints of the hosts accessed by a user
* `authorized_keys`: file containing all of the authorized public keys that have been generated. This is what tells the server that it’s Ok to use the key to allow a connection
## Using with GHA
### ssh-action from AppleBoy
A general way to access your server with GHA (say for CICD) is to use the
[GitHub action from appleboy](https://github.com/appleboy/ssh-action) called
`ssh-action`
There are 3 key components needed to get this to work:
1. host
2. username
3. key
Each of these can/should be put into repository secrets. Setting those up is
outside the scope of this article. For details on how to set repository
secrets, see [this](https://docs.github.com/en/actions/security-guides/using-
secrets-in-github-actions) article.
Using the servers from above we could set up the following secrets
* SSH_HOST: web-t-001 IP Address
* SSH_KEY: the content from /home/user1/.ssh/id_rsa ( from web-t-002 )
* SSH_USERNAME: user1
And then set up a GitHub Action like this
name: Test Workflow
on:
workflow_dispatch:
jobs:
deploy:
runs-on: ubuntu-22.04
steps:
- name: deploy code
uses: appleboy/ssh-action@v0.1.10
with:
host: ${{ secrets.SSH_HOST }}
port: 22
key: ${{ secrets.SSH_KEY }}
username: ${{ secrets.SSH_USERNAME }}
script: |
echo ""This is a test"" > ~/test.txt
With this setup, the runner that executes the GHA is (basically) standing in
for `web-t-002`: it holds `user1`'s private key, so it has access to
`web-t-001` as `user1` in the same way we did in the terminal.
When this action is run it will ssh into `web-t-001` as `user1` and create a
file called `test.txt` in the home directory. The content of that file will be
""This is a test""
1. I'm using these keys so that I can gain access to the server as root ↩︎
",2024-07-13,ssh-keys,"If you want to access a server in a 'passwordless' way, the best approach I
know is to use SSH Keys. This is great, but what does that mean and how do you
set it up?
I'm going to attempt to write out the steps for getting this done.
Let's …
",SSH Keys,https://www.ryancheley.com/2024/07/13/ssh-keys/
ryan,technology,"I’ve been futzing around with SSL on this site since last December. I’ve had
about 4 attempts and it just never seemed to work.
Earlier this evening I was thinking about getting a second
[Linode](https://www.linode.com) just to get a fresh start. I was _this_ close
to getting it when I thought, what the hell, let me try to work it out one
more time.
And this time it actually worked.
I’m not really sure what I did differently, but using this
[site](https://certbot.eff.org/lets-encrypt/ubuntuxenial-apache) seemed to
make all of the difference.
The only other thing I had to do was make a change in the Wordpress settings
(from `http` to `https`) and enable a plugin [Really Simple
SSL](https://really-simple-ssl.com) and it finally worked.
I even got an ‘A’ from SSL Labs!

Again, not really sure why this seemed so hard and took so long.
I guess sometimes you just have to try over and over and over again
",2018-04-07,ssl-finally,"I’ve been futzing around with SSL on this site since last December. I’ve had
about 4 attempts and it just never seemed to work.
Earlier this evening I was thinking about getting a second
[Linode](https://www.linode.com) just to get a fresh start. I was _this_ close
to getting it …
",SSL ... Finally!,https://www.ryancheley.com/2018/04/07/ssl-finally/
ryan,technology,"I have a side project I've been working on for a while now. One thing that
happened over time is that the styling of the site grew organically. I'm not a
designer, and I didn't have a master set of templates or design principles
guiding the development. I kind of hacked it together and made it look ""nice
enough""
That was until I really started going from one page to another and realized
that the styling of various pages wasn't just a little off ... but A LOT
off.
As an aside, I'm using [tailwind](https://www.tailwind.com) as my CSS
Framework
I wanted to make some changes to the styling and realized I had two choices:
1. Manually go through each html template (the project is a Django project) and catalog the styles used for each element
OR
1. Try and write a `bash` command to do it for me
Well, before we jump into either choice, let's see how many templates there
are to review!
As I said above, this is a Django project. I keep all of my templates in a
single `templates` directory with each app having its own subdirectory.
I was able to use this one line to count the number of `html` files in the
templates directory (and all of the sub directories as well)
ls -R templates | grep html | wc -l
There are 3 parts to this:
1. `ls -R templates` will recursively list all of the files in the `templates` directory and any subdirectories it encounters
2. `grep html` will only keep the lines that contain `html`
3. `wc -l` counts the number of lines returned by the previous command
In each case one command is piped to the next.
This resulted in 41 `html` files.
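As an aside, a `find`-based one-liner should give the same count; I'm noting it here purely as an alternative, not what I actually ran:
find templates -name '*.html' | wc -l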
OK, I'm not going to want to manually review 41 files. Looks like we'll be
going with option 2, ""Try and write a `bash` command to do it for me""
In the end the `bash` script is actually relatively straightforward. We're
just using `grep` twice. But it's the options on `grep` (as well as the regex
used) that make the magic happen.
The first thing I want to do is find all of the lines that have the string
`class=` in them. Since there are `html` templates, that's a pretty sure fire
way to find all of the places where the styles I am interested in are being
applied
I use a package called `djhtml` to lint my templates, but just in case
something got missed, I want to ignore case when doing my regex, i.e., `class=`
should be found, but so should `cLass=` or `Class=`. In order to get that I
need to have the `i` flag enabled.
Since the `html` files may be in the base directory `templates` or one of the
subdirectories, I need to recursively search, so I include the `r` flag as
well
This gets us
grep -ri ""class="" templates/*
That command will output a whole bunch of lines like this:
templates/tasks/steps_lists.html: