author,category,content,published_date,slug,summary,title,url
ryan,management,"In every organization, three critical elements determine success: People,
Processes, and Priorities. While all are essential, their ranking matters
profoundly. Based on my experience across several organizations, I've found
that Processes must come first, followed by People, with Priorities anchored
firmly at the foundation.
This deliberate ordering—Processes at the top, People in the middle, and
Priorities as bedrock—creates the most stable and effective organizational
structure. When Processes guide how People work and how Priorities are
determined, organizations can avoid the chaos of constant priority shifts,
reduce dependency on specific individuals, and create consistent frameworks
for decision-making.
## Defining Terms
Let's define what each of these means from an organizational perspective:
1. Processes - How to solve the problems
2. People - Who will solve the problems
3. Priorities - The order in which to solve the problems
## Process
In my experience, ranking Priorities first leads to lots of changes to
Priorities. This week it's shipping a new feature to make all of the buttons
cornflower blue ... next week it's adding AI to the application. The week
after that it's to mine bitcoin. Priorities shift, and that's OK, but
priority-driven organizations seem to lack a true defining north star to help
guide them, which in my experience leads to chaos.
Ranking People first sounds like a good idea. I mean, who doesn't want to put
People first? I have found, however, that when People are prioritized first,
bad things can happen. Cliques can form. Only Sally can do thing X and they're
out for the next three weeks and no, there isn't any documentation on how to
do that. Management can be lax because that's just Bob being Bob, which can
lead to toxic work environments.
I think that putting Process first helps to mitigate, though not outright
eliminate, these concerns.
Processes help to determine how we do thing **X**. If Sally is out, that's OK
because we have a _Process_ and documentation to help us through it. Will we
get it done as quickly as Sally would have gotten it done? No, but we will get
it done before they come back.
Processes also help implement things like Codes of Conduct. Again, that won't
prevent cliques from forming, and no it won't keep Bob from being a jerk, but
it creates a framework to help deal with Bob being a jerk and potentially
remove him from the situation entirely.
Processes can also help with prioritization. Having a Process that helps to
guide HOW you prioritize can be very helpful. This doesn't prevent you from
switching up your Priorities, but it does help to keep you focused on
something long enough to complete it. And when you need to change a priority
it's a lot easier (and healthier) to be able to point to the Process that
drove the decision to change versus a statement like, ""I don't know, the CEO
saw something on Bloomberg and now we're doing this.""
Setting up Processes is hard. And in a small environment it can seem like
it's not worth it. For example, asking ""Why do we have a 17 page document that
talks about how Priorities are chosen if it's just a handful of People?"" Yes,
that IS hard. And it might not seem like it's worth it. But you don't need a
big long document to determine a Process on how to change Priorities. It can
be as simple as
> We are small and acknowledge that change is required. We will only change
> when a consensus of 60% of the team agree with the change OR if the CEO and
> CFO agree on the change.
More complicated Processes can come later. But at least now when a change is
needed you know HOW you're going to talk about that change!
## People
What comes second? I find that People should be next. It's the People that are
going to help make everything happen. It's the People that are going to help
get you over the finish line of the projects that are driven by your
Processes. It's People that will work the Processes.
Once you have good Processes and good People, then you can really start to set
Priorities that EVERYONE will understand.
### An Example
My least favorite answer to the question, ""Why do we do it this way?"" is ""I
don't know.""
In my opinion this points to a broken culture. It could be that when you
started you did ask questions, but you were shot down so many times for asking
that you just stopped asking. It could be that you're not very curious and
someone just told you and didn't provide a reason and you just accepted it as
gospel that this is the way that it needs to be done.
The reason why this is a toxic trait is that you can have a situation like
this occur:
While working on a report a requester indicated that the margins weren't quite
right and it was VERY important that they be 'just so'. I met with the
requester and asked them about the Process and it went something like this:
[Flow diagram of the report process](https://mermaid.live/edit#pako:eNpVkMtqwzAQRX9FzNoOjvxqvCgUSqCLQKGr1OpiGo1iU1kKiozjhvx75aRpk4Vg7pmDdNERNlYSVKC0HTYNOi_MTtWvrjWeLa3rPoRRrfm0h3pKrDXsZUpnrEkFtqqXYWLX9coa3zBltSR3Y63vrTWh-5e8w31TP3lGRjKr2DhtfePswHDA8ZL_7Kkhi2OJrR7j-JFd-gUyEH1d0W-3QLup0D1eB4zG9Kgv_Py-MBBBR67DVoYPOQrDmADfUEcCqjBKUthrL0CYU1Cx9_ZtNBuovOspgn4n0dNzi1uH3RXu0LxbexuhOsIBKs7TGedZXmQ8S_IkS8oIxoDLWZHMebFYhFOWGT9F8H2-IJk95ClP87RIF0XCi2IegbP9toFKod7T6Qc7uJk4)
When I drew out the flow and asked the requester why, they said, ""I don't
know, that's just how Tim trained me""
I was fortunate that Tim was still at the company, so I called him and asked
about the Process.
He laughed and said something to the effect of, ""They're still doing that? I
only had that in place because of an issue with a fax machine 8 years ago but
IT fixed it. Why are they still doing it that way?""
""Because that's how they were trained""
🤦🏻‍♂️
Always understand why you're doing a thing. Always. This points to the need
for Process, and why I place it first. Process matters and it helps to inform
the People what they need to do.
## Priorities
Why are Priorities last? How can something as important as Priorities be last?
I would argue that Priorities should be the bedrock of your organization and
they should be HARD to change. Constantly shifting Priorities leads to
dissatisfaction and burnout. It can also lead People to wonder if what they
do actually matters. If it's always changing, why should I care about what I'm
working on right now if it's just going to be different later today, tomorrow,
or next week?
The interplay between Processes, People, and Priorities forms the backbone of
any effective organization. By putting Processes first, we create the
infrastructure that enables People to thrive and Priorities to remain stable.
Good Processes provide clarity, continuity, and a framework for decision-
making that transcends individual preferences or momentary urgencies.
When organizations understand that Priorities should be difficult to
change—and that a clear Process should govern how and when they change—they
protect their teams from the whiplash of constant redirection. This stability
doesn't mean rigidity; rather, it ensures that when change does occur, it
happens deliberately, transparently, and with organizational buy-in.
Whether you're leading a startup of five People or managing departments within
a large corporation, begin by examining your Processes. Are they documented?
Do People understand not just what to do, but why? Is there a clear Process
for establishing and modifying Priorities? If you can answer ""yes"" to these
questions, you've laid the groundwork for an organization where People can
contribute meaningfully to Priorities that truly matter.
Remember: Process first, People second, and Priorities as the bedrock. Get
this order right, and you'll build an organization that can handle change
without losing its way.
",2025-03-09,Process-People-and-Priorities,"In every organization, three critical elements determine success: People,
Processes, and Priorities. While all are essential, their ranking matters
profoundly. Based on my experience across several organizations, I've found
that Processes must come first, followed by People, with Priorities anchored
firmly at the foundation.
This deliberate ordering—Processes at the …
","Process, People, and Priorities",https://www.ryancheley.com/2025/03/09/Process-People-and-Priorities/
ryan,musings,"The [Tableau Conference](https://tc19.tableau.com) was held at the Mandalay
Bay Convention Center this year (and will be again next year in 2020). I had
the opportunity to attend (several weeks ago) and decided to write up my
thoughts about it.
This is an introverted newbie’s guide navigating the conference.
The conference started on Tuesday with pre-conference sessions that you had to
register for (and pay for). I did not attend those.
Tuesday night there was a big welcome reception that I very nearly bailed on
because of how many people there were, but I decided to give it a shot anyway.
I’m glad I did.
The welcome reception (as well as all of the meals) was held in the data
village (basically the convention show floor) which was a little weird but it
worked.
In the reception they had industry specific areas (healthcare being one of
them). I didn’t know this going in ... I just kind of stumbled into it.
This was the luckiest break I could have had as I sat there the entire night
and met about 10 people. Three of them (Josh, Kerry, and Molly) I spoke to the
most, so much so that we decided that we’d go to the ‘Data Night Out’ (the
client party) together.
Being super introverted this was not my jam, but I’m glad I went, and I will
go again next year.
Each day is jam packed full of sessions. I didn’t come across any sessions
that were not worthwhile, although some were better than others.
You do have to register for the session in order to gain admittance to the
room (they scan your badge to make sure you belong) but there seemed to be
standby room in most of the sessions I attended.
## Keynote events
There are ‘Keynote’ events to kick off each day. They happen in the Mandalay
Bay events center, but there is also an overflow room you can watch them from.
I would recommend going to at least one event in the events center, but as an
introvert the overflow was really more my speed. A room that could sit 500
people with only 50 in it ... yes please!
## Iron Viz
A take on Iron Chef, Iron Viz was a chance for 3 Tableau wizards to showcase
their skills with Tableau and a shared data set. It was really interesting to
see the different ways that the data could be presented and the different
stories that each competitor told for their visualizations.
## Data Night Out
I didn’t do this, mostly because by Thursday I was pretty overwhelmed and just
needed a quiet night in. I don’t regret not going, but I think I will make
myself go next year.
## Data Culture
I’m going to write more on this once I get my head really wrapped around it,
but suffice it to say, this is something that I think is going to be very
important going forward for the organization I work for.
",2019-12-17,a-beginners-guide-to-tableau-conference-2019-edition,"The [Tableau Conference](https://tc19.tableau.com) was held at the Mandalay
Bay Convention Center this year (and will be again next year in 2020). I had
the opportunity to attend (several weeks ago) and decided to write up my
thoughts about it.
This is an introverted newbie’s guide navigating the conference.
The …
",A beginners guide to Tableau Conference - 2019 edition,https://www.ryancheley.com/2019/12/17/a-beginners-guide-to-tableau-conference-2019-edition/
ryan,musings,"One of the earliest memories of my grandmother is visiting her in 29 Palms 1 2
in her permanent mobile home. I remember sitting on the davenport watching the
Dodgers on a small 13"" COLOR CRT TV. I remember that the game was broadcast on
KTLA5. But what I remember the most is the voice of Vin Scully.
I don't know who the Dodgers were playing, but I remember how much my
grandmother LOVED to listen to Vin call the game. And it stuck with me. I was
probably about 7 or 8 and I thought baseball was ""boring"". To be fair, I
thought most sports were boring, but especially baseball. Nothing ever
happens! But, I loved my grandmother, and I loved hanging out with her 3 and
so I watched the game with her.
Years later I discovered that yes, I did like baseball, and no, it was not
boring. And since my grandmother was a Dodgers fan, then I would be too. It
was something that connected us. It didn't matter where I lived, or how old I
was, we both loved baseball. We both loved the Dodgers. We both loved to hear
Vin call the game.
My grandmother died in 2007, but something that helped to connect me to her in
the years since was watching the Dodgers. Listening to Vin.
As Vin got older, he still called the home games, but he handed most of the
road games to a new crew. I still loved to watch Dodgers games, but I loved
watching the games he called a _little_ bit more. At the start of each season
I always kind of wondered, ""is this the last year for Vin?"". And in 2016 the
answer was yes.
I still remember the last game [he called in Dodgers
Stadium](https://www.espn.com/mlb/game/_/gameId/360925119). I remember the
back and forth. I remember the Rockies going up 1 run in the top of the 9th.
And the Dodgers tying it back up in the bottom of the 9th. And I remember when
[Charlie Culberson hit the game winning home run in the bottom of the
10th](https://youtu.be/HayOXW09kl8).
I remember the last game [Vin called in San
Francisco](https://www.ryancheley.com/2016/10/03/vins-last-game/). I remember
the Dodgers lost ... but it was Vin's last game, so I still loved getting the
chance to watch it. And to listen to him call the game.
Vin passed at the age of 94 on Aug 2, 2022. Just as I knew that there would be
a day when Vin retired from calling games, I knew there would be a day when he
wouldn't be with us anymore.
I've been trying to process this and figure out _why_ this is hitting me as hard
as it is.
It all comes back to my grandmother. They never met each other (at least I
don't think they did), but in my head they were inextricably connected. Vin
was a connection to my grandmother that I didn't fully realize I had, and with
his passing that connection isn't there anymore. He hasn't called a game in
more than 5 years, but still, knowing that he NEVER will again is hitting a
bit hard for me. And I think it's because it reminds me that my grandma isn't
here to watch the games with me anymore, and that bums me out. She was a cool
lady who always loved the Dodgers ... and Vin.
# WinForVin
1. Yes that 29 Palms, right next to the [LARGEST Marine Corps Base in the WORLD](https://en.wikipedia.org/wiki/Marine_Corps_Air_Ground_Combat_Center_Twentynine_Palms) ↩︎
2. also the 29 Palms that is right next to [Joshua Tree](https://en.wikipedia.org/wiki/Joshua_Tree,_California) home to the [National Park](https://en.wikipedia.org/wiki/Joshua_Tree_National_Park) that is the current catnip of Hipsters ↩︎
3. she always had the [butter scotch hard candies](https://www.candynation.com/butterscotch-candy-buttons) that were my favorite ↩︎
",2022-08-05,a-goodbye-to-vin,"One of the earliest memories of my grandmother is visiting her in 29 Palms 1 2
in her permanent mobile home. I remember sitting on the davenport watching the
Dodgers on a small 13"" COLOR CRT TV. I remember that the game was broadcast on
KTLA5. But what I remember …
",A Goodbye to Vin,https://www.ryancheley.com/2022/08/05/a-goodbye-to-vin/
ryan,musings,"This is mostly for me to write down my notes and thoughts about the book “How
to Win Friends and Influence People.”
I’ve noted the summary from the end of each section below (so I don’t forget
what they were).
The first three sections seemed to speak to my modern sensibilities the most
(keep in mind this book was published in 1936 ... the version I read was
revised in 1981).
I have the summaries below, for reference, but I wanted to have my own take on
each.
## Fundamental Techniques in Handling People
This seems to be a long way of saying the “Use the **Golden Rule** ” over and
over again. The three points are:
1. Don’t criticize, condemn or complain
2. Give honest and sincere appreciation
3. Arouse in the other person an eager want
## Six ways to make people like you
The ‘rules’ presented here are also useful for making small talk at parties
(or other gatherings). I find that talking about myself with a total stranger
is about the hardest thing I can do. I try to engage with people at parties
and have what I hope are interesting questions to ask should I need to. Stuff
I tend to avoid:
* What do you do for a living?
* Where do you work?
* Sports
* Politics
Stuff I try to focus on:
* How do you know the host / acquaintance we may have in common
* What’s the most interesting problem you’ve solved or are working to solve in the last week
* Have you been on a vacation recently? What was your favorite part about it? (With this one I don’t let people off the hook with ‘being away from work’ ... I try to find something that they really found enjoyable and interesting)
These talking points are usually a pretty good starting point. Sometimes when
I’m introduced to a person by way of their job, e.g. “This is Sally Jones,
she’s a Doctor at the local Hospital,” I’ll use that to pivot away from
something work focused (what kind of doctor are you) to something more person
focused: why did you want to become a doctor? Where did you go to Medical
School? Did you always know you wanted to be a doctor? I try to focus on
getting to know them better and have them talk about themselves.
The tips from the book support my intuition when meeting new people. They are:
1. Become genuinely interested in other people
2. Smile
3. Remember that a person’s name is to that person the sweetest and most important sound in any language
4. Be a good listener. Encourage others to talk about themselves
5. Talk in terms of the other person’s interest
6. Make the other person feel important - and do it sincerely
## How to Win People to your way of thinking
This section provided the most useful and helpful information (for me
anyway!). It really leads to how to have better influence (than winning
friends).
One of the problems I’ve suffered from throughout my life is the **need** to
be right about a thing. This section has concrete tips and examples of how to
not be the smartest person in the room, but working on being the most
influential person in the room.
My favorite is the first one, which I’ll paraphrase to be “The only way to win
an argument is to avoid it!” I’d never thought about trying to avoid
arguments, only how to win them once I was in them. The idea reminds me a bit
of [War Games](https://en.m.wikipedia.org/wiki/WarGames ""War Game with Matthew
Broderick \(1984\)""). At the end, Joshua, the super computer that is trying to
figure out how to win a Nuclear War with the USSR, concedes that the only way
to win is to not play at all. Just like an argument.
The other piece that really struck me was get the other person to say ‘Yes’.
This is kind of sales-y and could be smarmy if used with a subtext of
insincerity, but I think that the examples given in the book, and using it in
the context of trying to win friends AND influence people it can go a long
way.
The tips from this section of the book are:
1. The only way to get the best of an argument is to avoid it
2. Show respect for the other person’s opinions. Never say “You’re wrong”
3. If you are wrong, admit it quickly and emphatically
4. Begin in a friendly way
5. Get the other person saying “yes, yes” immediately
6. Let the other person do a great deal of the talking
7. Let the other person feel that the idea is his or hers
8. Try honestly to see things from the other person’s perspective
9. Be sympathetic with the other person’s ideas and desires
10. Appeal to the nobler motives
11. Dramatize your ideas
12. Throw down a challenge
## Be a Leader: How to change people without giving offense or arousing
resentment
This section has the best points, but the stories were _very_ contrived.
Again, this goes to how to win influence more than winning friends. Some of
the items are a bit too 1930s for my taste (numbers 2, 3, and 6 in particular
seem overly outdated). But overall, they are good ideas to work towards.
The tips are:
1. Begin with praise and honest appreciation
2. Call attention to the person’s mistake indirectly
3. Talk about your own mistakes before criticizing the other person
4. Ask questions instead of giving direct orders
5. Let the other person save face
6. Praise the slightest improvement and praise every improvement. Be “hearty in your approbation and lavish in your praise”
7. Give the other person a fine reputation to live up to
8. Use encouragement. Make the fault seem easy to correct
9. Make the other person happy about doing the thing you suggest
Overall I’m really glad that I read this book and glad that my
[CHIME](https://chimecentral.org) mentor [Tim
Gibbs](https://www.linkedin.com/in/srtim/) recommended it to me.
I’ve been actively working to include these ideas into my work and home life
and have found some surprising benefits. It’s also helping to make me a little
less stressed out.
If you’re looking for a bit of help in trying to be a better influencer in
your organization, or your personal life, [this
book](https://www.amazon.com/How-Win-Friends-Influence-
People/dp/1439167346/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1527122851&sr=8-1
""How to Win Friends and Influence People"") is well worth the read.
",2018-05-23,a-summary-of-dale-carnegies-how-to-win-friends-and-influence-people,"This is mostly for me to write down my notes and thoughts about the book “How
to Win Friends and Influence People.”
I’ve noted below the summary from the end of each section below (so I don’t
forget what they were).
The first three sections seemed to speak …
",A Summary of Dale Carnegie’s “How to Win Friends and Influence People”,https://www.ryancheley.com/2018/05/23/a-summary-of-dale-carnegies-how-to-win-friends-and-influence-people/
Ryan Cheley,pages,"I'm Ryan Cheley and this is my site. I've got various places on the internet
you can find me, like [GitHub](https://github.com/ryancheley),
[Mastodon](https://mastodon.social/@ryancheley), and [here](/)!
I like writing [Python](https://www.python.org), and when developing web
stuff, I like to use [Django](https://www.djangoproject.com). A couple of
Django projects I've done can be found
[here](https://stadiatracker.com/Pages/home) and
[here](https://doestatisjrhaveanerrortoday.com).
The source code for
[DoesTatisJrHaveAnErrorToday.com](https://doestatisjrhaveanerrortoday.com) can
be found [here](https://github.com/ryancheley/tatis).
If you're really interested, you can find my CV [here](/cv/).
",2025-04-02,about,"I'm Ryan Cheley and this is my site. I've got various places on the internet
you can find me, like [GitHub](https://github.com/ryancheley),
[Mastodon](https://mastodon.social/@ryancheley), and [here](/)!
I like writing [Python](https://www.python.org), and when developing web
stuff, I like to use [Django](https://www.djangoproject.com). A couple of
Django projects I've done can be found
[here](https://stadiatracker.com/Pages/home) and …
",About,https://www.ryancheley.com/pages/about/
ryan,technology,"Over the long holiday weekend I had the opportunity to play around a bit with
some of my Raspberry Pi scripts and try to do some fine tuning.
I mostly failed in getting anything to run better, but I did discover that not
having my code in version control was a bad idea. (Duh)
I spent the better part of an hour trying to find a script that I had
accidentally deleted somewhere in my blog. Turns out it was (mostly) there,
but it didn’t ‘feel’ right … though I’m not sure why.
I was able to restore the file from my blog archive, but I decided that was a
dumb way to live, given that:
1. I use version control at work (and have for the last 15 years)
2. I’ve used it for other personal projects
However, I’ve only ever used a GUI version of either subversion (at work) or
GitHub (for personal projects via PyCharm). I’ve never used it from the
command line.
And so, with a bit of time on my hands I dove in to see what needed to be
done.
Turns out, not much. I used this
[GitHub](https://help.github.com/articles/adding-an-existing-project-to-
github-using-the-command-line/) resource to get me what I needed. Only a
couple of commands and I was in business.
The problem is that I have a terrible memory and this isn’t something I’m
going to do very often. So, I decided to write a bash script to encapsulate
all of the commands and help me out a bit.
The script looks like this:
echo ""Enter your commit message:""
read commit_msg
git commit -m ""$commit_msg""
git remote add origin path/to/repository
git remote -v
git push -u origin master
git add $1
echo ”enter your commit message:”
read commit_msg
git commit -m ”$commit_msg”
git push
I just recently learned about user input in bash scripts and was really
excited about the opportunity to be able to use it. Turns out it didn’t take
long to try it out! (God I love learning things!)
What the script does is commit the files that have been changed (all of
them), add them to the origin on the GitHub repo that has been specified,
print verbose logging to the screen (so I can tell what I’ve messed up if it
happens), and then push the changes to the master.
This script doesn’t allow you to specify what files to commit, nor does it
allow for branching and tagging … but I don’t need those (yet).
I added this script to 3 of my projects, each of which can be found in the
following GitHub Repos:
* [rpicamera-hummingbird](https://github.com/ryancheley/rpicamera-hummingbird)
* [rpi-dodgers](https://github.com/ryancheley/rpi-dodgers)
* [rpi-kings](https://github.com/ryancheley/rpi-kings)
I had to make the commit.sh executable (with `chmod +x commit.sh`) but other
than that it’s basically plug and play.
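A run ends up looking something like this (the file name here is just for
illustration, not one of the actual project files):
./commit.sh camera.py
Enter your commit message:
update the hummingbird capture schedule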
## Addendum
I made a change to my Kings script tonight (Nov 27) and it wouldn’t get pushed
to git. After a bit of Googling and playing around, I determined that the
original script would only push changes to an empty repo ... not one with
stuff, like I had. Changes made to the post (and the GitHub repo!)
",2018-11-25,adding-my-raspberry-pi-project-code-to-github,"Over the long holiday weekend I had the opportunity to play around a bit with
some of my Raspberry Pi scripts and try to do some fine tuning.
I mostly failed in getting anything to run better, but I did discover that not
having my code in version control was …
",Adding my Raspberry Pi Project code to GitHub,https://www.ryancheley.com/2018/11/25/adding-my-raspberry-pi-project-code-to-github/
ryan,technology,"Last summer I migrated my blog from [Wordpress](https://wordpress.com) to
[Pelican](https://getpelican.com). I did this for a couple of reasons (see my
post [here](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-
wordpress/)), but one thing that I was a bit worried about when I migrated was
that Pelican's offering for site search didn't look promising.
There was an outdated plugin called [tipue-search](https://github.com/pelican-
plugins/tipue-search) but when I was looking at it I could tell it was on its
last legs.
I thought about it, and since my blog isn't super highly trafficked AND you can
use google to search a specific site, I could wait a bit and see what options
came up.
After waiting a few months, I decided it would be interesting to see if I
could write a SQLite utility to get the data from my blog, add it to a SQLite
database and then use [datasette](https://datasette.io) to serve it up.
I wrote the beginning scaffolding for it last August in a utility called
[pelican-to-sqlite](https://pypi.org/project/pelican-to-sqlite/0.1/), but I
ran into several technical issues I just couldn't overcome. I thought about
giving up, but sometimes you just need to take a step away from a thing,
right?
After the first of the year I decided to revisit my idea, but first looked to
see if there was anything new for Pelican search. I found a plugin called
[search](https://github.com/pelican-plugins/search) that was released last
November and is actively being developed, but as I read through the
documentation there was just **A LOT** of stuff:
* stork
* requirements for the structure of your page html
* static asset hosting
* deployment requires updating your `nginx` settings
These all looked a bit scary to me, and since I've done some work using
[datasette](https://datasette.io) I thought I'd revisit my initial idea.
## My First Attempt
As I mentioned above, I wrote the beginning scaffolding late last summer. In
my first attempt I tried to use a few tools to read the `md` files and parse
their `yaml` structure and it just didn't work out. I also realized that
`Pelican` can have [reStructured Text](https://www.sphinx-
doc.org/en/master/usage/restructuredtext/basics.html) and that any attempt to
parse just the `md` file would never work for those file types.
## My Second Attempt
### The Plugin
During the holiday I thought a bit about approaching the problem from a
different perspective. My initial idea was to try and write a `datasette`
style package to read the data from `pelican`. I decided instead to see if I
could write a `pelican` plugin to get the data and then add it to a SQLite
database. It turns out, I can, and it's not that hard.
Pelican uses `signals` to make plugin creation a pretty easy thing. I read
a [post](https://blog.geographer.fr/pelican-plugins) and the
[documentation](https://docs.getpelican.com/en/latest/plugins.html) and was
able to start my effort to refactor `pelican-to-sqlite`.
From [The missing Pelican plugins guide](https://blog.geographer.fr/pelican-
plugins) I saw lots of different options, but realized that the signal
`article_generator_write_article` is what I needed to get the article content
that I needed.
I then also used `sqlite_utils` to insert the data into a database table.
def save_items(record: dict, table: str, db: sqlite_utils.Database) -> None:  # pragma: no cover
    db[table].insert(record, pk=""slug"", alter=True, replace=True)
Below is the method I wrote to take the content and turn it into a dictionary
which can be used in the `save_items` method above.
def create_record(content) -> dict:
    record = {}
    author = content.author.name
    category = content.category.name
    post_content = html2text.html2text(content.content)
    published_date = content.date.strftime(""%Y-%m-%d"")
    slug = content.slug
    summary = html2text.html2text(content.summary)
    title = content.title
    url = ""https://www.ryancheley.com/"" + content.url
    status = content.status
    if status == ""published"":
        record = {
            ""author"": author,
            ""category"": category,
            ""content"": post_content,
            ""published_date"": published_date,
            ""slug"": slug,
            ""summary"": summary,
            ""title"": title,
            ""url"": url,
        }
    return record
Putting these together I get a method used by the Pelican Plugin system that
will generate the data I need for the site AND insert it into a SQLite
database:
def run(_, content):
    record = create_record(content)
    save_items(record, ""content"", db)
def register():
    signals.article_generator_write_article.connect(run)
### The html template update
I use a custom implementation of [Smashing
Magazine](https://www.smashingmagazine.com/2009/08/designing-a-html-5-layout-
from-scratch/). This allows me to do some edits, though I mostly keep it
pretty stock. However, this allowed me to make a small edit to the `base.html`
template to include a search form.
In order to add the search form I added the following code to `base.html`
below the `nav` tag:
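The form itself is plain HTML; a minimal sketch of it looks like this (the
`action` URL assumes the Vercel project set up below, and the `text` input
feeds the `:text` parameter of the saved query):
<form action=""https://search-ryancheley.vercel.app/pelican/article_search"" method=""get"">
    <input type=""search"" name=""text"" placeholder=""Search ..."">
    <input type=""submit"" value=""Search"">
</form>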
### Putting it all together with datasette and Vercel
Here's where the **magic** starts. Publishing data to Vercel with `datasette`
is extremely easy with the `datasette` plugin [`datasette-publish-
vercel`](https://pypi.org/project/datasette-publish-vercel/).
You do need to have the [Vercel cli installed](https://vercel.com/cli), but
once you do, the steps for publishing your SQLite database are really well
explained in the `datasette-publish-vercel`
[documentation](https://github.com/simonw/datasette-publish-
vercel/blob/main/README.md).
One final step to do was to add a `make` command so I could just type a quick
command which would create my content, generate the SQLite database AND
publish the SQLite database to Vercel. I added the below to my `Makefile`:
vercel:
	{ \
	echo ""Generate content and database""; \
	make html; \
	echo ""Content generation complete""; \
	echo ""Publish data to vercel""; \
	datasette publish vercel pelican.db --project=search-ryancheley --metadata metadata.json; \
	echo ""Publishing complete""; \
	}
The line
datasette publish vercel pelican.db --project=search-ryancheley --metadata metadata.json; \
has an extra flag passed to it (`--metadata`) which allows me to use
`metadata.json` to create a saved query which I call `article_search`. The
contents of that saved query are:
select summary as 'Summary', url as 'URL', published_date as 'Published Date' from content where content like '%' || :text || '%' order by published_date
This is what allows the `action` in the `form` above to have a URL to link to
in `datasette` and return data!
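For reference, a minimal `metadata.json` defining that saved query could look
something like this (a sketch; the `pelican` key matches the `pelican.db`
database name above):
{
    ""databases"": {
        ""pelican"": {
            ""queries"": {
                ""article_search"": {
                    ""sql"": ""select summary as 'Summary', url as 'URL', published_date as 'Published Date' from content where content like '%' || :text || '%' order by published_date""
                }
            }
        }
    }
}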
With just a few tweaks I'm able to include a search tool, powered by datasette
for my pelican blog. Needless to say, I'm pretty pumped.
## Next Steps
There are still a few things to do:
1. separate search form html file (for my site)
2. formatting `datasette` to match site (for my vercel powered instance of `datasette`)
3. update the README for `pelican-to-sqlite` package to better explain how to fully implement
4. Get `pelican-to-sqlite` added to the [pelican-plugins page](https://github.com/pelican-plugins/)
",2022-01-16,adding-search-to-my-pelican-blog-with-datasette,"Last summer I migrated my blog from [Wordpress](https://wordpress.com) to
[Pelican](https://getpelican.com). I did this for a couple of reasons (see my
post [here](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-
wordpress/)), but one thing that I was a bit worried about when I migrated was
that Pelican's offering for site search didn't look promising.
There was an outdated plugin …
",Adding Search to My Pelican Blog with Datasette,https://www.ryancheley.com/2022/01/16/adding-search-to-my-pelican-blog-with-datasette/
ryan,microblog,"The AHL All Star Challenge was tonight and it was some of the most fun I've
had at Acrisure since it opened in late 2022. Most All Star style competitions
are pretty unserious, and can be, in my opinion, kind of boring as well. I
mean, I LOVE baseball, but watching the All Star game is not for me. And don't
get me started on the Home Run Derby. Snooze fest for me.
The AHL All Star competition though was something else! A Skills day yesterday,
but then the actual challenge today. Representatives from each division play
in a 3-on-3 style, in two 5-minute periods. If tied at the end, the tie is
broken with a shootout. The top two teams with the most wins face each other
in the Championship game.
The Championship game is a little different in that it's a single 6-minute
period. Again, if there is a tie at the end you have a shootout.
This means that you get to watch 7 'mini' games in about 2 1/2 hours. It's
pretty intense.
The Firebirds were the host team this year, but we only had one All Star,
[Cale Fleury](https://theahl.com/stats/player/7382/86/cale-fleury). He was
called up to the Kraken, so a replacement, [Jani
Nyman](https://theahl.com/stats/player/10127/86/jani-nyman) was made. Even
though the Firebirds have a really good record (24-15-1-5), they only had 1
player on the All Star Game because the Pacific Division has 10 teams (read my
thoughts on that [here](https://www.ryancheley.com/2024/02/24/realign-the-
ahl/)).
Anyway, the competition was pretty amazing tonight, and I'm really glad I got
to go. I'm kind of hoping to be able to go next year when it's in
[Rockford](https://icehogs.com/news/rockford-icehogs-to-host-2026-ahl-all-
star-classic).
",2025-02-03,ahl-all-star-challenge,"The AHL All Star Challenge was tonight and it was some of the most fun I've
had at Acrisure since it opened in late 2022. Most All Star style competitions
are pretty unserious, and can be, in my opinion, kind of boring as well. I
mean, I LOVE baseball, but …
",AHL All Star Challenge,https://www.ryancheley.com/2025/02/03/ahl-all-star-challenge/
ryan,microblog,"Since the All-Star break the Firebirds entered what is arguably their softest
part of their schedule with games against San Diego, Henderson, San Diego
again, Bakersfield, and Tucson. These 4 teams are in the bottom of the Pacific
division and in San Diego's case they are 20+ points behind the Firebirds.
I'm not sure what the hell is going on, but in their first game in San Diego
they won in overtime in what should have been a blowout, and in their second
game in Henderson they lost by 1 goal.
In their first home game post All Star break they again played San Diego and
lost 5-3 (the last goal being an empty netter so 🤷🏼) but they also gave up 2
goals in less than 40 seconds in the second period. That ended up really being
the difference.
That means that 3 games into their 5 game 'soft' patch, they're 1-2. They play
Bakersfield tomorrow night and I sure hope they find a way to get back into
their winning ways because this has been some pretty shitty hockey to watch.
[The Firebirds are 2-3 against the Condors this season](https://ahl-
data.ryancheley.com/games?sql=select+*%0D%0Afrom%0D%0A++games+g%0D%0Ainner+join+dim_date+d+on+g.game_date+%3D+d.date%0D%0A++where+d.season+%3D+%272024-25%27%0D%0A++and+%28%0D%0A++%28home_team%3D%27Coachella+Valley+Firebirds%27+and+away_team+%3D+%27Bakersfield+Condors%27%29%0D%0A++or+%0D%0A++away_team%3D%27Coachella+Valley+Firebirds%27+and+home_team+%3D+%27Bakersfield+Condors%27%0D%0A++%29%0D%0Aorder+by%0D%0A++g.game_id%0D%0A)
and have yet to beat the Condors at home this season.
To quote Han Solo, ""I have a bad feeling about this""
",2025-02-15,all-star-break-doldrums,"Since the All-Star break the Firebirds entered what is arguably their softest
part of their schedule with games against San Diego, Henderson, San Diego
again, Bakersfield, and Tucson. These 4 teams are in the bottom of the Pacific
division and in San Diego's case they are 20+ points behind the …
",all-star-break-doldrums,https://www.ryancheley.com/2025/02/15/all-star-break-doldrums/
ryan,musings,"About a month ago I discovered a kitschy band that did covers of current pop
songs but re-imagined as Gatsbyesque versions. I was instantly in love with
the new arrangements of these songs that I knew and the videos that they
posted on [YouTube](https://www.youtube.com/user/ScottBradleeLovesYa). I loved
it so much that I’ve been listening to them in Apple Music for a couple of
weeks as well (time permitting).
I mentioned to Emily this new band that I found and she told me that they
would be playing at the [McCallum Theatre](http://www.mccallumtheatre.com) and
I was in utter disbelief. We bought tickets that night (DD 113 and 114 ...
some of the best in the house!) and we were all set.
To say that I’ve been looking forward to this concert is an understatement.
For all the awesomeness that the YouTube videos have, I **knew** that a live
performance would be a major event and I was not disappointed.
I think this is a concert that anyone could enjoy and that everyone should
see. This was the first concert where I was both glad to be there AND glad
that I had gone (usually I’m just glad that I have gone and have a hard time
enjoying the moment while I’m there).
I have the set list below, mostly so I don’t forget what songs were played.
It’s also really cool because some of the performers at the concert were the
ones in the YouTube videos. Miche (pronounced Mickey) Braden was an amazingly
soulful singer, and her part of ‘All about that Bass’ was on point and
breathtaking!
It was such an awesome concert. I can’t wait to see them again!
## First Set
[Thriller](https://youtu.be/td-_pUPVjdo)
[Sweet child o mine](https://youtu.be/kJ3BAF_15yQ)
[Just Like Heaven](https://youtu.be/Fjd1seT1mMQ)
[Are you going to be my girl](https://youtu.be/Cdo0lfWoqws)
[Africa](https://youtu.be/IUlRavyDP6o)
[Lean on](https://youtu.be/nzFJNsij38c)
[All about that bass](https://youtu.be/G-N3alxKyjE)
## Second Set
[Umbrella](https://youtu.be/OBmlCZTF4Xs)
[Story of my life](https://youtu.be/FASi9lrUoYM)
[Since you been gone](https://youtu.be/lhod-UI40C0)
[Crazy - Gnarls Barkley](https://youtu.be/FyFwko9O2UE)
[Heart of glass](https://youtu.be/DTMoipsvGNc)
[Habits - Tove Lo](https://youtu.be/7hHZnvjCbVw)
[Time after time](https://youtu.be/yKcPEtKu7CM)
## Encore
[Stacy's mom](https://youtu.be/T2kOj-GFN8k)
[Creep - Radiohead](https://youtu.be/m3lF2qEA2cw)
[Such Great Heights](https://youtu.be/tti76BnCL98)
## Band
Hannah Gill - vocals
Demi Remick - Tap
Miche Braden - vocals
Natalie Angst - vocals
Casey Abrams - MC / vocals
Ryan Quinn - Vocals
Ben the Sax Guy - Sax and clarinet
Dave Tedeschi - drums
Steve Whipple - bass
Logan Evan Thomas - Piano
The trombone player was amazing, but I wasn’t able to find him on the [PMJ
Performers page](http://postmodernjukebox.com/performers/).
",2018-12-15,an-evening-with-post-modern-jukebox,"About a month ago I discovered a kitschy band that did covers of current pop
songs but re-imagined as Gatsbyesque versions. I was instantly in love with
the new arrangements of these songs that I knew and the videos that they
posted on [YouTube](https://www.youtube.com/user/ScottBradleeLovesYa). I loved
it so much that …
",An Evening with Post Modern Jukebox,https://www.ryancheley.com/2018/12/15/an-evening-with-post-modern-jukebox/
ryan,musings,"The thing about HIMSS is that there are a lot of people. I mean ... a lot of
people. More than 43k people will attend as speakers, exhibitors or attendees.
Let that sink in for a second.
No. Really. Let. That. Sink. In.
That’s more than the average [attendance of an MLB game](https://www.baseball-
reference.com/leagues/MLB/2017-misc.shtml ""Average attendance"") for 29 teams.
It’s ridiculous.
As an introvert you know what will drain you and what will invigorate you. For
me I need to be cautious of conferencing too hard. That is, I need to be aware
of myself, my surroundings and my energy levels.
My tips are:
1. Have a great playlist on your smart phone. I use an iPhone and get a subscription to Apple Music just for the conference. This allows me to have a killer set of music that helps to drown out the cacophony of people.
2. Know when you’ve reached your limit. Even with some sweet tunes it’s easy to get drained. When you’re done you’re done. Don’t be a hero.
3. Try to make at least one meaningful connection. I know, it’s hard. But it’s totally worth it. Other introverts are easy to spot because they’re the people on their smart phones pretending to write a blog post while listening to their sweet playlist. But if you can start a conversation, not small talk, it will be worth it. Attend a networking function that’s applicable to you and you’ll be able to find at least one or two people to connect with.
The other tips for surviving HIMSS are the same for any other conference:
1. Don’t worry about how you’re dressed ... you will **always** be underdressed when compared to Hospital Administrators ... you’re in ‘IT’ and you dress like it
2. Wear good walking shoes (see number 1 about being underdressed)
3. Drink plenty of water
4. Wash your hands and/or have hand sanitizer
5. Accept free food when it’s offered
Ok. One day down. 3+ more to go!
",2018-03-06,an-introverts-guide-to-large-conferences-or-how-i-survived-himss-2018-and-2017-and-2016,"The thing about HIMSS is that there are a lot of people. I mean ... a lot of
people. More than 43k people will attend as speakers, exhibitors or attendees.
Let that sink in for a second.
No. Really. Let. That. Sink. In.
That’s more than the average [attendance of …](https://www.baseball-
reference.com/leagues/MLB/2017-misc.shtml ""Average attendance"")
",An Introvert’s guide to large conferences ... or how I survived HIMSS 2018 (and 2017 and 2016),https://www.ryancheley.com/2018/03/06/an-introverts-guide-to-large-conferences-or-how-i-survived-himss-2018-and-2017-and-2016/
ryan,technology,"Nothing can ever really be considered **done** when you're talking about
programming, right?
I decided to try and add images to the [python script I wrote last
week](https://github.com/miloardot/python-
files/commit/e603eb863dbba169938b63df3fa82263df942984) and was able to do it,
with not too much hassle.
The first thing I decided to do was to update the code on `pythonista` on my
iPad Pro and verify that it would run.
It took some doing (mostly because I _forgot_ that the attributes in an `img`
tag included what I needed ... initially I was trying to programmatically get
the name of the person from the image file itelf using [regular
expressions](https://en.wikipedia.org/wiki/Regular_expression) ... it didn't
work out well).
Once that was done I branched the `master` on GitHub into a `development`
branch and copied the changes there. Once that was done I performed a **pull
request** on the macOS GitHub Desktop Application.
Finally, I used the macOS GitHub app to merge my **pull request** from
`development` into `master` and now have the changes.
The updated script will now also get the image data to display into the multi
markdown table:
| Name | Title | Image |
| --- | --- | --- |
|Mike Cheley|CEO/Creative Director||
|Ozzy|Official Greeter||
|Jay Sant|Vice President||
|Shawn Isaac|Vice President||
|Jason Gurzi|SEM Specialist||
|Yvonne Valles|Director of First Impressions||
|Ed Lowell|Senior Designer||
|Paul Hasas|User Interface Designer||
|Alan Schmidt|Senior Web Developer||
Which gets displayed as this:
(rendered as a table with each person's name, title, and head shot)
",2016-10-22,an-update-to-my-first-python-script,"Nothing can ever really be considered **done** when you're talking about
programming, right?
I decided to try and add images to the [python script I wrote last
week](https://github.com/miloardot/python-
files/commit/e603eb863dbba169938b63df3fa82263df942984) and was able to do it,
with not too much hassle.
The first thing I decided to do was to update the …
",An Update to my first Python Script,https://www.ryancheley.com/2016/10/22/an-update-to-my-first-python-script/
ryan,productivity,"In my first post of this series I outlined the steps needed in order for me to
post. They are:
1. Run `make html` to generate the SQLite database that powers my site's search tool1
2. Run `make vercel` to deploy the SQLite database to vercel
3. [Run `git add <filename>` to add post to be committed to GitHub](https://www.ryancheley.com/2022/01/26/git-add-filename-automation/)
4. [Run `git commit -m <message>` to commit to GitHub](https://www.ryancheley.com/2022/01/28/auto-generating-the-commit-message)
5. [Post to Twitter with a link to my new post](https://www.ryancheley.com/2022/01/24/auto-tweeting-new-post/)
In this post I'll be focusing on how I automated step 4, Run `git commit -m
<message>` to commit to GitHub.
# Automating the ""git commit ..."" part of my workflow
In order for my GitHub Action to auto post to Twitter, my commit message needs
to be in the form of ""New Post: ..."". What I'm looking for is to be able to
have the commit message be something like this:
> New Post: Great New Post https://ryancheley.com/yyyy/mm/dd/great-new-post/
This is basically just three parts from the markdown file, the `Title`, the
`Date`, and the `Slug`.
In order to get those details, I need to review the structure of the markdown
file. For Pelican writing in markdown my file is structured like this:
Title:
Date:
Tags:
Slug:
Series:
Authors:
Status:
My words start here and go on for a bit.
In [the last post](https://www.ryancheley.com/2022/01/28/auto-generating-the-
commit-message) I wrote about how to `git add` the files in the content
directory. Here, I want to take the file that was added to `git` and get the
first 7 rows, i.e. the details from `Title` to `Status`.
The file that was updated and needs to be added to git can be identified by
running
find content -name '*.md' -print | sed 's/^/""/g' | sed 's/$/""/g' | xargs git add
Running `git status` now will display which file was added with the last
command and you'll see something like this:
❯ git status
On branch main
Untracked files:
(use ""git add ..."" to include in what will be committed)
content/productivity/auto-generating-the-commit-message.md
What I need though is a more easily parsable output. Enter the `porcelain` flag
which, per the docs
> Give the output in an easy-to-parse format for scripts. This is similar to
> the short output, but will remain stable across Git versions and regardless
> of user configuration. See below for details.
which is exactly what I needed.
Running `git status --porcelain` you get this:
❯ git status --porcelain
?? content/productivity/more-writing-automation.md
Now, I just need to get the file path and exclude the status (the `??` above
in this case2), which I can do by piping in the results and using `sed`
❯ git status --porcelain | sed s/^...//
content/productivity/more-writing-automation.md
The `sed` portion says
* search the output string starting at the beginning of the line (`^`)
* find the first three characters (`...`). 3
* replace them with nothing (`//`)
There are a few lines here that I need to get the content of for my
commit message:
* Title
* Slug
* Date
* Status4
I can use `head` to get the first `n` lines of a file. In this case, I need
the first 7 lines of the output from `git status --porcelain | sed s/^...//`.
To do that, I pipe it to `head`!
git status --porcelain | sed s/^...// | xargs head -7
That command will return this:
Title: Auto Generating the Commit Message
Date: 2022-01-24
Tags: Automation
Slug: auto-generating-the-commit-message
Series: Auto Deploying my Words
Authors: ryan
Status: draft
In order to get the **Title** , I'll pipe this output to `grep` to find the
line with `Title`
git status --porcelain | sed s/^...// | xargs head -7 | grep 'Title: '
which will return this
Title: Auto Generating the Commit Message
Now I just need to remove the leading `Title:` and I've got the title I'm
going to need for my Commit message!
git status --porcelain | sed s/^...// | xargs head -7 | grep 'Title: ' | sed -e 's/Title: //g'
which return just
Auto Generating the Commit Message
I do this for each of the parts I need:
* Title
* Slug
* Date
* Status
Now, this is getting to have a lot of parts, so I'm going to throw it into a
`bash` script file called `tweet.sh`. The contents of the file look like this:
TITLE=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Title: ' | sed -e 's/Title: //g'`
SLUG=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Slug: ' | sed -e 's/Slug: //g'`
POST_DATE=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Date: ' | sed -e 's/Date: //g' | head -c 10 | grep '-' | sed -e 's/-/\//g'`
POST_STATUS=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Status: ' | sed -e 's/Status: //g'`
You'll see above that the `Date` piece is a little more complicated, but it's
just doing a find and replace on the `-` to update them to `/` for the URL.
Now that I've got all of the pieces I need, it's time to start putting them
together.
I define a new variable called `URL` and set it:
URL=""https://ryancheley.com/$POST_DATE/$SLUG/""
and the commit message:
MESSAGE=""New Post: $TITLE $URL""
Now, all I need to do is wrap this in an `if` statement so the command only
runs when the STATUS is `published`
if [ $POST_STATUS = ""published"" ]
then
    MESSAGE=""New Post: $TITLE $URL""
    git commit -m ""$MESSAGE""
    git push github main
fi
Putting this all together (including the `git add` from my previous post),
the `tweet.sh` file looks like this:
# Add the post to git
find content -name '*.md' -print | sed 's/^/""/g' | sed 's/$/""/g' | xargs git add
# Get the parts needed for the commit message
TITLE=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Title: ' | sed -e 's/Title: //g'`
SLUG=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Slug: ' | sed -e 's/Slug: //g'`
POST_DATE=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Date: ' | sed -e 's/Date: //g' | head -c 10 | grep '-' | sed -e 's/-/\//g'`
POST_STATUS=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Status: ' | sed -e 's/Status: //g'`
URL=""https://ryancheley.com/$POST_DATE/$SLUG/""
if [ $POST_STATUS = ""published"" ]
then
    MESSAGE=""New Post: $TITLE $URL""
    git commit -m ""$MESSAGE""
    git push github main
fi
When this script is run it will find an updated or added markdown file (i.e.
article) and add it to git. It will then parse the file to get data about the
article. If the article is set to published it will commit the file with a
message and will push to github. Once at GitHub, [the Tweeting action I wrote
about](https://www.ryancheley.com/2022/01/24/auto-tweeting-new-post/) will
tweet my commit message!
In the next (and last) article, I'm going to throw it all together and to get
a spot when I can run one make command that will do all of this for me.
## Caveats
The script above works, but if you have multiple articles that you're working
on at the same time, it will fail pretty spectacularly. The final version of
the script has guards against that and looks like
[this](https://github.com/ryancheley/ryancheley.com/blob/main/tweet.sh)
1. `make vercel` actually runs `make html` so this isn't really a step that I need to do. ↩︎
2. Other values could just as easily be `M` or `A` ↩︎
3. Why the first three characters, because that's how `porcelain` outputs the `status` ↩︎
4. I will also need the `Status` to do some conditional logic otherwise I may have a post that is in draft status that I want to commit and the GitHub Action will run posting a tweet with an article and URL that don't actually exist yet. ↩︎
",2022-01-28,auto-generating-the-commit-message,"In my first post of this series I outlined the steps needed in order for me to
post. They are:
1. Run `make html` to generate the SQLite database that powers my site's search tool1
2. Run `make vercel` to deploy the SQLite database to vercel
3. [Run `git add <filename>` to …](https://www.ryancheley.com/2022/01/26/git-add-filename-automation/)
",Auto Generating the Commit Message,https://www.ryancheley.com/2022/01/28/auto-generating-the-commit-message/
ryan,productivity,"Each time I write something for this site there are several steps that I go
through to make sure that the post makes it's way to where people can see it.
1. Run `make html` to generate the SQLite database that powers my site's search tool1
2. Run `make vercel` to deploy the SQLite database to vercel
3. Run `git add <filename>` to add post to be committed to GitHub
4. Run `git commit -m <message>` to commit to GitHub
5. Post to Twitter with a link to my new post
If there's more than 2 things to do, I'm totally going to forget to do one of
them.
The above steps are all automatable, but the one I wanted to tackle first was
the automated tweet. Last night I figured out how to tweet with a GitHub
action.
There were a few things to do to get the auto tweet to work:
1. Find a GitHub Action in the Marketplace that did the auto tweet (or try to write one if I couldn't find one)
2. Set up a twitter app with Read and Write privileges
3. Set the necessary secrets for the repo (API Key, API Key Secret, Access Token, Access Token Secret, Bearer)
4. Test the GitHub Action
The action I chose was [send-tweet-action](https://github.com/ethomson/send-
tweet-action). It's got easy to read
[documentation](https://github.com/ethomson/send-tweet-
action/blob/main/README.md) on what is needed. Honestly the hardest part was
getting a twitter app set up with Read and Write privileges.
I'm still not sure how to do it, honestly. I was lucky enough that I already
had an app sitting around with Read and Write from the WordPress blog I had
previously, so I just regenerated the keys for that one and used them.
The last bit was just testing the action and seeing that it worked as
expected. It was pretty cool running an action and then seeing a tweet in my
timeline.
The TIL for this was that GitHub Actions can have conditionals. This is
important because I don't want to generate a new tweet each time I commit to
main. I only want that to happen when I have a new post.
To do that, you just need this in the GitHub Action:
if: ""contains(github.event.head_commit.message, '')""
In my case, the `` is `New Post:`.
The `send-tweet-action` has a `status` field which is the text tweeted. I can
use the `github.event.head_commit.message` in the action like this:
${{ github.event.head_commit.message }}
Now when I have a commit message that starts with 'New Post:' against `main` I'll
have a tweet get sent out too!
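Pieced together, the relevant step of the workflow looks something like this (a sketch: the input names follow the `send-tweet-action` README, and the secret names here are illustrative):
- name: Tweet the new post
  if: ""contains(github.event.head_commit.message, 'New Post:')""
  uses: ethomson/send-tweet-action@v1
  with:
    status: ${{ github.event.head_commit.message }}
    consumer-key: ${{ secrets.TWITTER_CONSUMER_API_KEY }}
    consumer-secret: ${{ secrets.TWITTER_CONSUMER_API_SECRET }}
    access-token: ${{ secrets.TWITTER_ACCESS_TOKEN }}
    access-token-secret: ${{ secrets.TWITTER_ACCESS_TOKEN_SECRET }}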
This got me to thinking that I can/should automate all of these steps.
With that in mind, I'm going to work on getting the process down to just
having to run a single command. Something like:
make publish ""New Post: Title of my Post https://www.ryancheley.com/yyyy/mm/dd/slug/""
1. `make vercel` actually runs `make html` so this isn't really a step that I need to do. ↩︎
",2022-01-24,auto-tweeting-new-post,"Each time I write something for this site there are several steps that I go
through to make sure that the post makes its way to where people can see it.
1. Run `make html` to generate the SQLite database that powers my site's search tool1
2. Run `make vercel` to …
",Auto Tweeting New Post,https://www.ryancheley.com/2022/01/24/auto-tweeting-new-post/
ryan,technology,"We got everything set up, and now we want to automate the deployment.
Why would we want to do this you ask? Let’s say that you’ve decided that you
need to set up a test version of your site (what some might call UAT) on a new
server (at some point I’ll write something up about multiple Django
Sites on the same server and part of this will still apply then). How can you
do it?
Well you’ll want to write yourself some scripts!
I have a mix of Python and Shell scripts set up to do this. They are a bit
piecemeal, but they also allow me to run specific parts of the process
without having to execute a script with ‘commented out’ pieces.
**Python Scripts**
create_server.py
destroy_droplet.py
**Shell Scripts**
copy_for_deploy.sh
create_db.sh
create_server.sh
deploy.sh
deploy_env_variables.sh
install-code.sh
setup-server.sh
setup_nginx.sh
setup_ssl.sh
super.sh
upload-code.sh
The Python script `create_server.py` looks like this:
# create_server.py
import requests
import os
from collections import namedtuple
from operator import attrgetter
from time import sleep
Server = namedtuple('Server', 'created ip_address name')
doat = os.environ['DIGITAL_OCEAN_ACCESS_TOKEN']
# Create Droplet
headers = {
    'Content-Type': 'application/json',
    'Authorization': f'Bearer {doat}',
}
# an illustrative payload; your name/region/size/image will differ
data = '{""name"": ""example-droplet"", ""region"": ""sfo2"", ""size"": ""s-1vcpu-1gb"", ""image"": ""ubuntu-18-04-x64""}'
print('>>> Creating Server')
requests.post('https://api.digitalocean.com/v2/droplets', headers=headers, data=data)
print('>>> Server Created')
print('>>> Waiting for Server Stand up')
sleep(90)
print('>>> Getting Droplet Data')
params = (
    ('page', '1'),
    ('per_page', '10'),
)
get_droplets = requests.get('https://api.digitalocean.com/v2/droplets', headers=headers, params=params)
server_list = []
for d in get_droplets.json()['droplets']:
    server_list.append(Server(d['created_at'], d['networks']['v4'][0]['ip_address'], d['name']))
server_list = sorted(server_list, key=attrgetter('created'), reverse=True)
server_ip_address = server_list[0].ip_address
db_name = os.environ['DJANGO_PG_DB_NAME']
db_username = os.environ['DJANGO_PG_USER_NAME']
# compare against the current production server's IP, read here from a
# hypothetical PRODUCTION_IP_ADDRESS environment variable
if server_ip_address != os.environ.get('PRODUCTION_IP_ADDRESS'):
    print('>>> Run server setup')
    os.system(f'./setup-server.sh {server_ip_address} {db_name} {db_username}')
    print(f'>>> Server setup complete. You need to add {server_ip_address} to the ALLOWED_HOSTS section of your settings.py file ')
else:
    print('WARNING: Running Server set up will destroy your current production server. Aborting process')
Earlier I said that I liked Digital Ocean because of its nice API for
interacting with its servers (i.e. Droplets). Here we start to see some of that.
The first part of the script uses my Digital Ocean Token and some input
parameters to create a Droplet via the Command Line. The `sleep(90)` allows
the process to complete before I try to get the IP address. Ninety seconds is
a bit longer than is needed, but I figure, better safe than sorry … I’m sure
that there’s a way to ask DO whether the just-created droplet has an IP
address, but I haven’t figured it out yet.
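Something like this sketch would probably do it, polling the Droplets endpoint until an IPv4 address shows up (untested; it reuses the `headers` and `data` from above):
print('>>> Creating Server')
droplet_id = requests.post('https://api.digitalocean.com/v2/droplets', headers=headers, data=data).json()['droplet']['id']
server_ip_address = None
while not server_ip_address:
    sleep(5)  # give the droplet a few seconds between checks
    droplet = requests.get(f'https://api.digitalocean.com/v2/droplets/{droplet_id}', headers=headers).json()['droplet']
    if droplet['networks']['v4']:
        server_ip_address = droplet['networks']['v4'][0]['ip_address']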
After we create the droplet AND it has an IP address, we pass it to the
bash script `setup-server.sh`.
# setup-server.sh
#!/bin/bash
# Create the server on Digital Ocean
export SERVER=$1
# The server IP address is the 1st argument
if [[ -z ""$1"" ]]
then
echo ""ERROR: No value set for server ip address1""
exit 1
fi
echo -e ""\n>>> Setting up $SERVER""
ssh root@$SERVER /bin/bash << EOF
set -e
echo -e ""\n>>> Updating apt sources""
apt-get -qq update
echo -e ""\n>>> Upgrading apt packages""
apt-get -qq upgrade
echo -e ""\n>>> Installing apt packages""
apt-get -qq install python3 python3-pip python3-venv tree supervisor postgresql postgresql-contrib nginx
echo -e ""\n>>> Create User to Run Web App""
if getent passwd burningfiddle
then
echo "">>> User already present""
else
adduser --disabled-password --gecos """" burningfiddle
echo -e ""\n>>> Add newly created user to www-data""
adduser burningfiddle www-data
fi
echo -e ""\n>>> Make directory for code to be deployed to""
if [[ ! -d ""/home/burningfiddle/BurningFiddle"" ]]
then
mkdir /home/burningfiddle/BurningFiddle
else
echo "">>> Skipping Deploy Folder creation - already present""
fi
echo -e ""\n>>> Create VirtualEnv in this directory""
if [[ ! -d ""/home/burningfiddle/venv"" ]]
then
python3 -m venv /home/burningfiddle/venv
else
echo "">>> Skipping virtualenv creation - already present""
fi
# I don't think i need this anymore
echo "">>> Start and Enable gunicorn""
systemctl start gunicorn.socket
systemctl enable gunicorn.socket
EOF
./setup_nginx.sh $SERVER
./deploy_env_variables.sh $SERVER
./deploy.sh $SERVER
All of that stuff we did before, logging into the server and running commands,
we’re now doing via a script. What the above does is attempt to keep the
server in an idempotent state (that is to say you can run it as many times as
you want and you don’t get weird artifacts … if you’re a math nerd you may
have heard idempotent in Linear Algebra to describe the multiplication of a
matrix by itself and returning the original matrix … same idea here!)
The one thing that is new here is the part
ssh root@$SERVER /bin/bash << EOF
...
EOF
A block like that says, “take everything in between the `EOF` markers and run it on the
server I just ssh’d into, using bash.”
At the end we run 3 shell scripts:
* `setup_nginx.sh`
* `deploy_env_variables.sh`
* `deploy.sh`
Let’s review these scripts.
The script `setup_nginx.sh` copies several files needed for the `nginx`
service:
* `gunicorn.service`
* `gunicorn.socket`
* `nginx.conf`
It then sets up a link between `sites-available` and `sites-enabled` for
`nginx` and finally restarts `nginx`.
# setup_nginx.sh
export SERVER=$1
export sitename=burningfiddle
scp -r ../config/gunicorn.service root@$SERVER:/etc/systemd/system/
scp -r ../config/gunicorn.socket root@$SERVER:/etc/systemd/system/
scp -r ../config/nginx.conf root@$SERVER:/etc/nginx/sites-available/$sitename
ssh root@$SERVER /bin/bash << EOF
echo -e "">>> Set up site to be linked in Nginx""
ln -s /etc/nginx/sites-available/$sitename /etc/nginx/sites-enabled
echo -e "">>> Restart Nginx""
systemctl restart nginx
echo -e "">>> Allow Nginx Full access""
ufw allow 'Nginx Full'
EOF
The script `deploy_env_variables.sh` copies environment variables. There are
packages (and other methods) that help to manage environment variables better
than this, and that is one of the enhancements I’ll be looking at.
This script captures the values of various environment variables (one at a
time) and then passes them through to the server. It then checks to see if
these environment variables exist on the server and will place them in the
`/etc/environment` file:
export SERVER=$1
# printenv NAME prints just the value of NAME
DJANGO_SECRET_KEY=$(printenv DJANGO_SECRET_KEY)
DJANGO_PG_PASSWORD=$(printenv DJANGO_PG_PASSWORD)
DJANGO_PG_USER_NAME=$(printenv DJANGO_PG_USER_NAME)
DJANGO_PG_DB_NAME=$(printenv DJANGO_PG_DB_NAME)
DJANGO_SUPERUSER_PASSWORD=$(printenv DJANGO_SUPERUSER_PASSWORD)
DJANGO_DEBUG=False
ssh root@$SERVER /bin/bash << EOF
if [[ ""\$DJANGO_SECRET_KEY"" != ""$DJANGO_SECRET_KEY"" ]]
then
echo ""DJANGO_SECRET_KEY=$DJANGO_SECRET_KEY"" >> /etc/environment
else
echo "">>> Skipping DJANGO_SECRET_KEY - already present""
fi
if [[ ""\$DJANGO_PG_PASSWORD"" != ""$DJANGO_PG_PASSWORD"" ]]
then
echo ""DJANGO_PG_PASSWORD=$DJANGO_PG_PASSWORD"" >> /etc/environment
else
echo "">>> Skipping DJANGO_PG_PASSWORD - already present""
fi
if [[ ""\$DJANGO_PG_USER_NAME"" != ""$DJANGO_PG_USER_NAME"" ]]
then
echo ""DJANGO_PG_USER_NAME=$DJANGO_PG_USER_NAME"" >> /etc/environment
else
echo "">>> Skipping DJANGO_PG_USER_NAME - already present""
fi
if [[ ""\$DJANGO_PG_DB_NAME"" != ""$DJANGO_PG_DB_NAME"" ]]
then
echo ""DJANGO_PG_DB_NAME=$DJANGO_PG_DB_NAME"" >> /etc/environment
else
echo "">>> Skipping DJANGO_PG_DB_NAME - already present""
fi
if [[ ""\$DJANGO_DEBUG"" != ""$DJANGO_DEBUG"" ]]
then
echo ""DJANGO_DEBUG=$DJANGO_DEBUG"" >> /etc/environment
else
echo "">>> Skipping DJANGO_DEBUG - already present""
fi
EOF
The `deploy.sh` calls two scripts itself:
# deploy.sh
#!/bin/bash
set -e
# Deploy Django project.
export SERVER=$1
#./scripts/backup-database.sh
./upload-code.sh
./install-code.sh
The final two scripts!
The `upload-code.sh` script uploads the files to the `deploy` folder of the
server while the `install-code.sh` script moves all of the files to where they
need to be on the server and restarts any services.
# upload-code.sh
#!/bin/bash
set -e
echo -e ""\n>>> Copying Django project files to server.""
if [[ -z ""$SERVER"" ]]
then
echo ""ERROR: No value set for SERVER.""
exit 1
fi
echo -e ""\n>>> Preparing scripts locally.""
rm -rf ../../deploy/*
rsync -rv --exclude 'htmlcov' --exclude 'venv' --exclude '*__pycache__*' --exclude '*staticfiles*' --exclude '*.pyc' ../../BurningFiddle/* ../../deploy
echo -e ""\n>>> Copying files to the server.""
ssh root@$SERVER ""rm -rf /root/deploy/""
scp -r ../../deploy root@$SERVER:/root/
echo -e ""\n>>> Finished copying Django project files to server.""
And finally,
# install-code.sh
#!/bin/bash
# Install Django app on server.
set -e
echo -e ""\n>>> Installing Django project on server.""
if [[ -z ""$SERVER"" ]]
then
echo ""ERROR: No value set for SERVER.""
exit 1
fi
echo $SERVER
ssh root@$SERVER /bin/bash << EOF
set -e
echo -e ""\n>>> Activate the Virtual Environment""
source /home/burningfiddle/venv/bin/activate
cd /home/burningfiddle/
echo -e ""\n>>> Deleting old files""
rm -rf /home/burningfiddle/BurningFiddle
echo -e ""\n>>> Copying new files""
cp -r /root/deploy/ /home/burningfiddle/BurningFiddle
echo -e ""\n>>> Installing Python packages""
pip install -r /home/burningfiddle/BurningFiddle/requirements.txt
echo -e ""\n>>> Running Django migrations""
python /home/burningfiddle/BurningFiddle/manage.py migrate
echo -e ""\n>>> Creating Superuser""
python /home/burningfiddle/BurningFiddle/manage.py createsuperuser --noinput --username bfadmin --email rcheley@gmail.com || true
echo -e ""\n>>> Load Initial Data""
python /home/burningfiddle/BurningFiddle/manage.py loaddata /home/burningfiddle/BurningFiddle/fixtures/pages.json
echo -e ""\n>>> Collecting static files""
python /home/burningfiddle/BurningFiddle/manage.py collectstatic
echo -e ""\n>>> Reloading Gunicorn""
systemctl daemon-reload
systemctl restart gunicorn
EOF
echo -e ""\n>>> Finished installing Django project on server.""
",2021-02-21,automating-the-deployment,"We got everything set up, and now we want to automate the deployment.
Why would we want to do this you ask? Let’s say that you’ve decided that you
need to set up a test version of your site (what some might call UAT) on a new
server …
",Automating the deployment,https://www.ryancheley.com/2021/02/21/automating-the-deployment/
ryan,productivity,"In my last post [Auto Generating the Commit
Message](https://www.ryancheley.com/2022/01/28/auto-generating-the-commit-
message/) I indicated that in this post I would ""throw it all together to get
to a spot where I can run one make command that will do all of this for me"".
I decided to take a brief detour though as I realized I didn't have a good way
to create a new post, i.e. the starting point wasn't automated!
In this post I'm going to go over how I create the start of a new post using
a `Makefile` and the command `make newpost`.
My initial idea was to create a new bash script (similar to the `tweet.sh`
file), but as a first iteration I went in a different direction based on this
post [How to Slugify Strings in Bash](https://blog.codeselfstudy.com/blog/how-
to-slugify-strings-in-bash/).
The command that the post above finally arrived at was
newpost:
    vim +':r templates/post.md' $(BASEDIR)/content/blog/$$(date +%Y-%m-%d)-$$(echo -n $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
which was **really** close to what I needed. My static site is set up a bit
differently and I'm not using `vim` (I'm using VS Code) to write my words.
The first change I needed to make was to remove the use of `vim` from the
command and instead use `touch` to create the file
newpost:
    touch $(BASEDIR)/content/blog/$$(date +%Y-%m-%d)-$$(echo -n $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
The second was to change the file path for where to create the file. As I've
indicated previously, the structure of my content looks like this:
content
├── musings
├── pages
├── productivity
├── professional\ development
└── technology
giving me an updated version of the command that looks like this:
touch content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
When I run the command `make newpost title='Automating the file creation'
category='productivity'` I get an empty new file created.
Now I just need to populate it with the data.
There are seven bits of metadata that need to be added, but four of them are
the same for each post:
Author: ryan
Tags:
Series: Remove if Not Needed
Status: draft
That allows me to have the `newpost` command look like this:
newpost:
    touch content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
    echo ""Author: ryan"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
    echo ""Tags: "" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
    echo ""Series: Remove if Not Needed"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
    echo ""Status: draft"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
The remaining metadata to be added are:
* Title:
* Date
* Slug
Of these, `Date` and `Title` are the most straightforward.
`bash` has a command called `date` that can be formatted in the way I want
with `%F`. Using this I can get the date like this:
echo ""Date: $$(date +%F)"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
For `Title` I can take the input parameter `title` like this:
echo ""Title: $${title}"" > content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
`Slug` is just `Title` but _slugified_. Trying to figure out how to do this is
how I found the [article](https://blog.codeselfstudy.com/blog/how-to-slugify-
strings-in-bash/) above.
Using a slightly modified version of the code that generates the file, we get
this:
printf ""Slug: "" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
echo ""$${title}"" | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
One thing to notice here is the `printf`. I needed/wanted to use `echo -n` but
`make` didn't seem to like that. [This StackOverflow
answer](https://stackoverflow.com/a/14121245) helped me to get a fix (using
`printf`) though I'm sure there's a way I can get it to work with `echo -n`.
Essentially, since this was a first pass, and I'm pretty sure I'm going to end
up re-writing this as a shell script, I didn't want to spend **too** much time
getting a perfect answer here.
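For reference, here's what that sed/tr chain boils down to, sketched in Python purely as an illustration:
import re

def slugify(title: str) -> str:
    # sed -e 's/[^[:alnum:]]/-/g': replace every non-alphanumeric character with '-'
    slug = re.sub(r'[^a-zA-Z0-9]', '-', title)
    # tr -s '-': squeeze runs of '-' down to one; tr A-Z a-z: lowercase
    return re.sub(r'-+', '-', slug).lower()

print(slugify('Automating the file creation'))  # automating-the-file-creation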
OK, with all of that, here's the entire `newpost` recipe I'm using now:
newpost:
    touch content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
    echo ""Title: $${title}"" > content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
    echo ""Date: $$(date +%F)"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
    echo ""Author: ryan"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
    echo ""Tags: "" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
    printf ""Slug: "" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
    echo ""$${title}"" | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
    echo ""Series: Remove if Not Needed"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
    echo ""Status: draft"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md
This allows me to type `make newpost` and generate a new file for me to start
my new post in!1
1. When this post was originally published the slug command didn't account for making all of the text lower case. This was fixed in a subsequent [commit](https://github.com/ryancheley/ryancheley.com/commit/54f41680fdca4131735346764048d4e5fd206fd6) ↩︎
",2022-02-02,automating-the-file-creation,"In my last post [Auto Generating the Commit
Message](https://www.ryancheley.com/2022/01/28/auto-generating-the-commit-
message/) I indicated that in this post I would ""throw it all together to get
to a spot where I can run one make command that will do all of this for me"".
I decided to take a brief detour though as I …
",Automating the file creation,https://www.ryancheley.com/2022/02/02/automating-the-file-creation/
ryan,technology,"Several weeks ago in [Cronjob Redux](/cronjob-redux.html) I wrote that I had
_finally_ gotten Cron to automate the entire process of compiling the `h264`
files into an `mp4` and uploading it to [YouTube](https://www.youtube.com).
I hadn’t. And it took the better part of the last 2 weeks to figure out what
the heck was going on.
Part of what I wrote before was correct. I wasn’t able to read the
`client_secrets.json` file and that was leading to an error.
I was _not_ correct on the creation of the `create_mp4.sh` though.
The reason I got it to run automatically that night was because I had, in my
testing, created the `create_mp4.sh` and when cron ran my `run_script.sh` it
was able to use what was already there.
The next night when it ran, the `create_mp4.sh` was already there, but the
`h264` files that were referenced in it weren’t. This led to no video being
uploaded and me being confused.
The issue was that cron was unable to run the part of the script that
generates the script to create the `mp4` file.
I’m close to having a fix for that, but for now I did the most inelegant thing
possible. I broke up the script in cron so it looks like this:
00 06 * * * /home/pi/Documents/python_projects/cleanup.sh
10 19 * * * /home/pi/Documents/python_projects/create_script_01.sh
11 19 * * * /home/pi/Documents/python_projects/create_script_02.sh >> $HOME/Documents/python_projects/create_mp4.sh 2>&1
12 19 * * * /home/pi/Documents/python_projects/create_script_03.sh
13 19 * * * /home/pi/Documents/python_projects/run_script.sh
At 6am every morning the `cleanup.sh` runs and removes the `h264` files, the
`mp4` file and the `create_mp4.sh` script.
At 7:10pm the
‘[header](https://gist.github.com/ryancheley/5b11cc15160f332811a3b3d04edf3780)’
for the `create_mp4.sh` runs. At 7:11pm the
‘[body](https://gist.github.com/ryancheley/9e502a9f1ed94e29c4d684fa9a8c035a)’
for `create_mp4.sh` runs. At 7:12pm the
‘[footer](https://gist.github.com/ryancheley/3c91a4b27094c365b121a9dc694c3486)’
for `create_mp4.sh` runs.
Finally at 7:13pm the `run_script.sh` compiles the `h264` files into an `mp4`
and uploads it to YouTube.
Last night while I was at a School Board meeting the whole process ran on its
own. I was super pumped when I checked my YouTube channel and saw that the May
1 hummingbird video was there and I didn’t have to do anything.
",2018-05-02,automating-the-hummingbird-video-upload-to-youtube-or-how-i-finally-got-cron-to-do-what-i-needed-it-to-do-but-in-the-ugliest-way-possible,"Several weeks ago in [Cronjob Redux](/cronjob-redux.html) I wrote that I had
_finally_ gotten Cron to automate the entire process of compiling the `h264`
files into an `mp4` and uploading it to [YouTube](https://www.youtube.com).
I hadn’t. And it took the better part of the last 2 weeks to figure out what …
",Automating the Hummingbird Video Upload to YouTube or How I finally got Cron to do what I needed it to do but in the ugliest way possible,https://www.ryancheley.com/2018/05/02/automating-the-hummingbird-video-upload-to-youtube-or-how-i-finally-got-cron-to-do-what-i-needed-it-to-do-but-in-the-ugliest-way-possible/
ryan,microblog,"After a week long hiatus from swimming I got back to it today. I only swam
1550 yards but it was a good swim. I kind of felt the need to take it a bit
easy today given the week long break, and I needed to be at the office a bit
early to get ready to help onboard a new employee. While it wasn't a great
distance, or a great time (2'45"" 100 yd pace) it still felt really good to be
back in the pool.
I am again back to feeling 'pretty sleepy' early in the evening which I'm
hoping will rid me of the [insomnia](2025/02/22/insomnia/) from last week.
One of the best / weirdest parts of the swim is the honking from the geese.
About 25 minutes into my swim they seem to wake up and just start honking at
each other ... or maybe at me ... or maybe at the people walking around. Not
really sure.
It is slightly off-putting. They are **very** loud, but it also makes me
giggle ... so that's something.
",2025-02-24,back-in-the-pool,"After a week long hiatus from swimming I got back to it today. I only swam
1550 yards but it was a good swim. I kind of felt the need to take it a bit
easy today given the week long break, and I needed to be at the office …
",Back in the pool,https://www.ryancheley.com/2025/02/24/back-in-the-pool/
ryan,musings,"Last weekend I watched both games 7 of the NBA conference finals. I have no
particular affinity for the NBA (I prefer the [Madness in March associated
with the
NCAA](https://en.m.wikipedia.org/wiki/NCAA_Division_I_Men%27s_Basketball_Tournament))
but I figured with 2 game 7s it might be interesting to watch. I was not
wrong.
On Sunday night Cleveland was hosted by Boston in a rematch of a game 7 from
2010. One of only 2 game 7s that LeBron James had lost.
This game had all the makings of what you would want a game 7 to be. A young
upstart rookie (Tatum) with something to prove. A veteran (James), also with
something to prove.
What really stuck out for me, for this game, was what happened at the 6:45
mark in the fourth quarter. Tatum dunked on LeBron (posterized is the term
[ESPN](http://www.espn.com/video/clip?id=23627416) used) to put the score at
71-69 Cleveland. What happened next though, I think, is why the Cavs won the
game.
Tatum proceeded to bump his chest up against the back of LeBron’s shoulder,
like a small child might run up to a big kid when he did something amazing to
be like, “Look at me ... I’m a big kid too!”
LeBron just stood there and looked at Tatum with incredulity. The announcers
seemed to enjoy the spectacle more than they should have. But LeBron just
stood there, the Boston crowd cheering wildly at what their young rookie had
just done. To dunk over LeBron, arguably one of the greatest, in a game 7?
This is the thing that legends are made of.
But while the crowd and the announcers saw James look like he was a mere
mortal ... what I saw was the game turning around. The look on James’ face
wasn’t one of ‘damn ... that kid just dunked on me.’ It was, ""Damn ... now I’m
going to get mine and I have a punk to show how this game is really played.”
From that point on the Cavs outscored the Celtics 16-10 ... not a huge margin,
but a margin enough to win. What the score doesn’t show is the look of
determination on LeBron’s face as he carried his team to the NBA Finals. Not
because he scored all 16 points (he _only_ scored 7) but because he checked
his ego at the door and worked to make his team better than the other team. In
short, he was a better teammate than Tatum in those last minutes and that’s
why the Cavs are in the Finals and the Celtics aren’t.
Tatum’s reaction to dunking on LeBron is understandable. Hell, if I had done
something like that when I was his age, I would have pumped my chest up too.
But it is the patience and reservedness (that perhaps come with age) that make
you a great player or team member. You don’t really want to rile up a great
player because that’s the only reason they need to whoop your butt.
Perhaps Tatum will learn this lesson. Perhaps he won’t.
Because you see, acting like a little kid isn’t just the right of a rookie.
James Harden pulled some immature shenanigans too in his team’s loss to the
Warriors. At one point, with the Rockets up 59-53 with 6:13 in the 3rd, Harden
went for a layup and was knocked down ... accidentally in my opinion.
When a player from the Warriors tried to help him up he just sat there and
then flailed his arms until one of his teammates came to help him up. Big man
there, Harden.
By the end of the 3rd quarter the Rockets were down 76-69. By the end of the
game they’d lost 101-92.
You see, when it comes down to it a great teammate will do what’s best for the
team, and not do what’s best for their ego. It doesn’t seem to matter, old or
young, rookie or veteran, not having the ability to control your emotions at
key points in a game (or in life) can be more costly than you realize.
Sometimes it’s game 7 of the NBA Conference finals, sometimes it’s just a pick
up game with some friends at the park, but in either case, being a good
teammate requires checking your ego at the door and working to be the best
teammate you can be, not being the best player on the court.
To put it another way, being the smartest person in the room doesn’t make you
the most influential person in the room, and when it comes down to moving
ahead, being influential trumps being smart.
",2018-06-08,basketball-conference-finals-or-how-the-actions-of-one-person-can-fire-up-the-other-team-and-lead-them-to-win,"Last weekend I watched both games 7 of the NBA conference finals. I have no
particular affinity for the NBA (I prefer the [Madness in March associated
with the
NCAA](https://en.m.wikipedia.org/wiki/NCAA_Division_I_Men%27s_Basketball_Tournament))
but I figured with 2 game 7s it might be interesting to watch. I was not
wrong.
On Sunday night …
",Basketball Conference Finals OR How the actions of one person can fire up the other team and lead them to win,https://www.ryancheley.com/2018/06/08/basketball-conference-finals-or-how-the-actions-of-one-person-can-fire-up-the-other-team-and-lead-them-to-win/
ryan,musings,"[Healthcare Big Data Success Starts with the Right
Questions](http://healthitanalytics.com/news/healthcare-big-data-success-
starts-with-the-right-questions)
> > The last major piece of the puzzle is the ability to pick projects that
> can bear fruit quickly, Ibrahim added, in order to jumpstart enthusiasm and
> secure widespread support.
* * *
[Healthcare Big Data Success Starts with the Right
Questions](http://healthitanalytics.com/news/healthcare-big-data-success-
starts-with-the-right-questions)
> > Moving from measurement to management – and from management to improvement
> – was the next challenge, he added.
* * *
[Healthcare Big Data Success Starts with the Right
Questions](http://healthitanalytics.com/news/healthcare-big-data-success-
starts-with-the-right-questions)
> > Each question builds upon the previous answer to create a comprehensive
> portrait of how data flows throughout a segment of the organization. Ibrahim
> paraphrased the survey like so:
• Do we have the data and analytics to connect to the important organizations
in each of these three domains?
• If we have the data, is it integrated in a meaningful way? Can we look at
that data and tell meaningful stories about what is happening, where it’s
happening, and why it’s happening?
• Even if we have the data and it’s integrated meaningfully and we can start
to tell that story, do we apply some statistical methodology to the data where
we aggregate and report on it?
• If we have the data, and it can tell us a story, and we use good analytics
methodology, are we able to present it in an understandable way to all our
stakeholders, from the front-line clinician all the way up to the chief
executive?
• Are the analytics really meaningful? Does the information help to make
decisions? Is it rich enough that we can really figure out why something is
happening?
• Lastly, even if we have accomplished all these other goals, can we deliver
the information in a timely fashion to the people who need this data to do
their jobs?
",2017-01-07,big-data-and-healthcare-thoughts,"[Healthcare Big Data Success Starts with the Right
Questions](http://healthitanalytics.com/news/healthcare-big-data-success-
starts-with-the-right-questions)
> > The last major piece of the puzzle is the ability to pick projects that
> can bear fruit quickly, Ibrahim added, in order to jumpstart enthusiasm and
> secure widespread support.
* * *
[Healthcare Big Data Success Starts with the Right
Questions](http://healthitanalytics.com/news/healthcare-big-data-success-
starts-with-the-right-questions)
> > Moving from measurement …
",Big Data and Healthcare - thoughts,https://www.ryancheley.com/2017/01/07/big-data-and-healthcare-thoughts/
Ryan Cheley,pages,"# Speaking / Podcasts
1. Speaker at PyCascades 2025: [Error Culture](https://youtu.be/FBMg2Bp4I-Q)
2. Speaker at DjangoCon US 2024: [Error Culture](https://2024.djangocon.us/talks/error-culture/)
3. Speaker at DjangoCon US 2023: [Contributing to Django or how I learned to stop worrying and just try to fix an ORM Bug](https://youtu.be/VPldDxuJDsg?si=r2ob3j4zIeYZY7tO)
4. Guest on [Test & Code episode 183](https://testandcode.com/183) where I spoke about the ""challenges of managing software teams, and how to handle them"" and other skills
# OSS Work
1. Contributed to the following open source projects:
* [DjangoProject.com](https://www.djangoproject.com) with [PR](https://github.com/django/django/pull/12128) which I wrote about [here](https://www.ryancheley.com/2019/12/07/my-first-commit-to-an-open-source-project-django/)
* [Django](https://github.com/django/django/) with [PR](https://github.com/django/django/pull/16243) which I wrote about [here](https://www.ryancheley.com/2022/11/12/contributing-to-django/)
* [DjangoPackages.org](https://djangopackages.org)
* Limited TextField size to help eliminate potential for Spam, closing a 10 year old issue with [PR](https://github.com/djangopackages/djangopackages/commit/5463558eb5f6a10978158946c7867725b57d14dd)
* Added support for emoji with [PR](https://github.com/djangopackages/djangopackages/commit/051c5ca14d25cb39d7d56ea63e4cfb317d78c13c)
* Added Support for [Emojificate](https://pypi.org/project/emojificate/) with [PR](https://github.com/djangopackages/djangopackages/pull/849) to make emoji accessible ""with fallback images, alt text, title text and aria labels to represent emoji in HTML""
* [Tryceratops](https://pypi.org/project/tryceratops/) with [PR](https://github.com/guilatrova/tryceratops/commits?author=ryancheley) which I wrote about [here](https://www.ryancheley.com/2021/08/07/contributing-to-tryceratops/)
* [Wagtail-Resume](https://pypi.org/project/wagtail-resume/) with [PR](https://github.com/adinhodovic/wagtail-resume/pull/32)
* [Diagrams](https://pypi.org/project/diagrams/) with [PR](https://github.com/mingrammer/diagrams/pull/426)
* [MLB-StatsAPI](https://pypi.org/project/MLB-StatsAPI/) with [PR](https://github.com/toddrob99/MLB-StatsAPI/pull/41)
* [django-sql-dashboard](https://pypi.org/project/django-sql-dashboard/) with [PR](https://github.com/simonw/django-sql-dashboard/pull/138) which I wrote about [here](https://www.ryancheley.com/2021/07/09/contributing-to-django-sql-dashboard/)
* [dnspython](https://pypi.org/project/dnspython/) with [PR](https://github.com/rthalley/dnspython/issues/775)
* [markdown-to-sqlite](https://pypi.org/project/markdown-to-sqlite/) with [PR](https://github.com/simonw/markdown-to-sqlite/pull/3)
2. Author and Maintainer of the Open Source Projects:
* [toggl-to-sqlite](https://pypi.org/project/toggl-to-sqlite/)
* [the-well-maintained-test](https://pypi.org/project/the-well-maintained-test/) which I wrote about [here](https://cur.at/4n0KtYP?m=web)
* The package was mentioned in [Django News Issue #104](https://django-news.com/issues/104)
* The package is featured in the [Rich Gallery](https://www.textualize.io/rich/gallery/4)
* [pelican-to-sqlite](https://pypi.org/project/pelican-to-sqlite/) which I wrote about [here](https://www.ryancheley.com/2022/01/16/adding-search-to-my-pelican-blog-with-datasette/)
3. One of the Maintainers of [Django Packages](https://djangopackages.org) with [Jeff Triplett](https://github.com/jefftriplett) and [Maksudul Haque](https://fosstodon.org/@saadmk11)
4. Member of the [Python Software Foundation](https://www.python.org/users/rcheley/)
5. Member of the [Django Software Foundation](https://www.djangoproject.com/foundation/minutes/2021/nov/11/dsf-board-monthly-meeting/)
6. Navigator for [Djangonaut.space](https://djangonaut.space)
* Session 1 (Jan 15, 2024 - Mar 11, 2024)
* Session 2 (Jun 17, 2024 - Aug 12, 2024)
* Session 4 (Feb 17, 2025 - Apr 13, 2025)
7. [Django Commons](https://github.com/django-commons/) admin
# Certifications
1. [Google Cloud Platform Cloud Architect](https://www.credential.net/f8e9ee03-67cb-48e3-8d3e-d824afc6265b?key=38397759fd07a2225d694c34d34f994bcdde3b9922962d865e4e9c6df478f139)
2. Certified EDI Academy Professional
# Guest Writing
1. Have been published on the [PyBites Blog](https://pybit.es/author/ryancheley/)
# Other
1. Ran 13 half marathons in 13 months
* SkyBorne, December 2013
* Carlsbad, January 2014
* Palm Springs, February 2014
* Zion National Park, March 2014
* La Jolla, April 2014
* Menifee, May 2014
* San Diego Rock 'n Roll, June 2014
* Fourth of July Virtual, July 2014
* America's Finest City, August 2014
* Ventura, September 2014
* San Luis Obispo, October 2014
* Santa Barbara, November 2014
* SkyBorne, December 2014
2. Member of [Bermuda Dunes Community Council](https://rivco4.org/Councils/Community-Councils), September 2009 - June 2013
3. Created a [Django site to track Stadiums](https://stadiatracker.com/Pages/home) that I've visited
",2025-04-02,brag-doc,"# Speaking / Podcasts
1. Speaker at PyCascades 2025: [Error Culture](https://youtu.be/FBMg2Bp4I-Q)
2. Speaker at DjangoCon US 2024: [Error Culture](https://2024.djangocon.us/talks/error-culture/)
3. Speaker at DjangoCon US 2023: [Contributing to Django or how I learned to stop worrying and just try to fix an ORM Bug](https://youtu.be/VPldDxuJDsg?si=r2ob3j4zIeYZY7tO)
4. Guest on [Test & Code episode 183](https://testandcode.com/183) where I spoke about the ""challenges …
",Brag Doc,https://www.ryancheley.com/pages/brag-doc/
ryan,microblog,"One of the great things about living in the desert of Southern california is
that during the winter time the day time temps are typically in the high 60s
or low 70s. This makes outdoor activities amazing experiences. What's even
better is that every January / February the California Winter League gears up
and my wife and will spend Saturday mornings (and sometimes afternoons)
watching baseball under the gloriously beautiful sky.
The best part is that the teams are filled with high school, and college
hopefuls, so it's baseball in kind of its rawest form. Better than little
league, but not quite as good as Pro ball. And since it's a winter league with
essentially made up teams, my wife and I will pick a team to root for and then
spend the ensuing 7 innings trash talking each other as 'our' team is winning.
Another great part is that it's a relatively inexpensive outing. Each Saturday
two games are played, and for $10 for each adult you get access to both games.
The games are only 7 innings long but they use wooden bats instead of aluminum
bats so it feels more like pro ball than college or high school ball.
And just because it's an instructional league doesn't mean there aren't some
great plays made. Just today I saw a hit-stealing diving catch made by a
shortstop, and a diving catch into foul territory made by a right fielder that
ran faster than I really thought was possible.
",2025-02-01,california-winter-league,"One of the great things about living in the desert of Southern california is
that during the winter time the day time temps are typically in the high 60s
or low 70s. This makes outdoor activities amazing experiences. What's even
better is that every January / February the California Winter League …
",California Winter League,https://www.ryancheley.com/2025/02/01/california-winter-league/
Ryan Cheley,pages," * Wallet
* iPhone 14
* Apple Watch Series 8 45mm
* iPad Pro 12.9 2021
* [Tom Bihn Synik 30](https://www.tombihn.com/products/synik-30?variant=42599481901245)
",2025-04-02,carry," * Wallet
* iPhone 14
* Apple Watch Series 8 45mm
* iPad Pro 12.9 2021
* [Tom Bihn Synik 30](https://www.tombihn.com/products/synik-30?variant=42599481901245)
",Carry,https://www.ryancheley.com/pages/carry/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/ArchiveIndexView/)
`ArchiveIndexView`
> > Top-level archive of date-based items.
## Attributes
There are 20 attributes that can be set for the `ArchiveIndexView` but most of
them are based on ancestral Classes of the CBV so we won’t be going into them
in detail.
### DateMixin Attributes
* allow_future: Defaults to False. If set to True you can show items that have dates that are in the future where the future is anything after the current date/time on the server.
* date_field: the field that the view will use to filter the date on. If this is not set an error will be generated
* uses_datetime_field: Convert a date into a datetime when the date field is a DateTimeField. When time zone support is enabled, `date` is assumed to be in the current time zone, so that displayed items are consistent with the URL.
### BaseDateListView Attributes
* allow_empty: Defaults to `False`. This means that if there is no data a `404` error will be returned with the message
> > `No __str__ Available` where ‘`__str__`’ is the display of your model
* date_list_period: This attribute allows you to break down by a specific period of time (years, months, days, etc.) and group your date driven items by the period specified. See below for implementation
For `year`
views.py
date_list_period='year'
urls.py
Nothing special needs to be done
<template_name>.html
{% block content %}
{% for date in date_list %}
{{ date.year }}
{% for p in person %}
{% if date.year == p.post_date.year %}
{{ p }}
{% endif %}
{% endfor %}
{% endfor %}
{% endblock %}
Will render:
[Screenshot: the items rendered grouped by year]
For `month`
views.py
date_list_period='month'
urls.py
Nothing special needs to be done
<template_name>.html
{% block content %}
{% for date in date_list %}
{{ date.month }}
{% for p in person %}
{% if date.month == p.post_date.month %}
{{ p }}
{% endif %}
{% endfor %}
{% endfor %}
{% endblock %}
Will render:
[Screenshot: the items rendered grouped by month]
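Pieced together, a minimal view using these attributes might look like this (a sketch that assumes the same `Person` model used elsewhere in this series):
class myArchiveIndexView(ArchiveIndexView):
    queryset = Person.objects.all()
    date_field = 'post_date'
    date_list_period = 'year'
    context_object_name = 'person'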
### BaseArchiveIndexView Attributes
* context_object_name: Name the object used in the template. As stated before, you’re going to want to do this so you don’t hate yourself (or have other developers hate you).
## Other Attributes
### MultipleObjectMixin Attributes
These attributes were all reviewed in the [ListView](/cbv-listview.html) post
* model = None
* ordering = None
* page_kwarg = 'page'
* paginate_by = None
* paginate_orphans = 0
* paginator_class = django.core.paginator.Paginator
* queryset = None
### TemplateResponseMixin Attributes
This attribute was reviewed in the [ListView](/cbv-listview.html) post
* content_type = None
### ContextMixin Attributes
This attribute was reviewed in the [ListView](/cbv-listview.html) post
* extra_context = None
### View Attributes
This attribute was reviewed in the [View](/cbv-view.html) post
* http_method_names = ['get', 'post', 'put', 'patch', 'delete', 'head', 'options', 'trace']
### TemplateResponseMixin Attributes
These attributes were all reviewed in the [ListView](/cbv-listview.html) post
* response_class = TemplateResponse
* template_engine = None
* template_name = None
## Diagram
A visual representation of how `ArchiveIndexView` is derived can be seen here:
[Diagram: ArchiveIndexView class hierarchy]
## Conclusion
With date-driven data (articles, blogs, etc.) the `ArchiveIndexView` is a
great CBV and super easy to implement.
",2019-11-24,cbv-archiveindexview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/ArchiveIndexView/)
`ArchiveIndexView`
> > Top-level archive of date-based items.
## Attributes
There are 20 attributes that can be set for the `ArchiveIndexView` but most of
them are based on ancestral Classes of the CBV so we won’t be going into them
in detail.
### DateMixin Attributes
* allow_future: Defaults to …
",CBV - ArchiveIndexView,https://www.ryancheley.com/2019/11/24/cbv-archiveindexview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/BaseListView/)
`BaseListView`
> > A base view for displaying a list of objects.
And from the [Django Docs](https://docs.djangoproject.com/en/2.2/ref/class-
based-views/generic-display/#listview):
> > A base view for displaying a list of objects. It is not intended to be
> used directly, but rather as a parent class of the
> django.views.generic.list.ListView or other views representing lists of
> objects.
Almost all of the functionality of `BaseListView` comes from the
`MultipleObjectMixin`. Since the Django Docs specifically say don’t use this
directly, I won’t go into it too much.
## Diagram
A visual representation of how `BaseListView` is derived can be seen here:
[Diagram: BaseListView class hierarchy]
## Conclusion
Don’t use this. It should be subclassed into a usable view (a la `ListView`).
There are many **Base** views that are ancestors for other views. I’m not
going to cover any more of them going forward **UNLESS** the documentation
says there’s a specific reason to.
",2019-11-17,cbv-baselistview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/BaseListView/)
`BaseListView`
> > A base view for displaying a list of objects.
And from the [Django Docs](https://docs.djangoproject.com/en/2.2/ref/class-
based-views/generic-display/#listview):
> > A base view for displaying a list of objects. It is not intended to be
> used directly, but rather as a parent class of the
> django.views.generic.list.ListView …
",CBV - BaseListView,https://www.ryancheley.com/2019/11/17/cbv-baselistview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/CreateView/)
`CreateView`
> > View for creating a new object, with a response rendered by a template.
## Attributes
Three attributes are required to get the template to render. Two we’ve seen
before (`queryset` and `template_name`). The new one we haven’t seen before is
the `fields` attribute.
* fields: specifies what fields from the model or queryset will be displayed on the rendered template. You can set `fields` to `__all__` if you want to return all of the fields
## Example
views.py
class myCreateView(CreateView):
    queryset = Person.objects.all()
    fields = '__all__'
    template_name = 'rango/person_form.html'
urls.py
path('create_view/', views.myCreateView.as_view(), name='create_view'),
<template_name>.html
{% extends 'base.html' %}
{% block title %}
{{ title }}
{% endblock %}
{% block content %}
{{ type }} View
<form method=""post"">
{% csrf_token %}
{{ form }}
<input type=""submit"" value=""Save"">
</form>
{% endblock %}
## Diagram
A visual representation of how `CreateView` is derived can be seen here:
[Diagram: CreateView class hierarchy]
## Conclusion
A simple way to implement a form to create items for a model. We’ve completed
step 1 for a basic **C**RUD application.
",2019-12-01,cbv-createview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/CreateView/)
`CreateView`
> > View for creating a new object, with a response rendered by a template.
## Attributes
Three attributes are required to get the template to render. Two we’ve seen
before (`queryset` and `template_name`). The new one we haven’t seen before is
the `fields` attribute …
",CBV - CreateView,https://www.ryancheley.com/2019/12/01/cbv-createview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/DayArchiveView/)
`DayArchiveView`
> > List of objects published on a given day.
## Attributes
There are six new attributes to review here … well really 3 new ones and then
a formatting attribute for each of these 3:
* day: The day to be viewed
* day_format: The format of the day to be passed. Defaults to `%d`
* month: The month to be viewed
* month_format: The format of the month to be passed. Defaults to `%b`
* year: The year to be viewed
* year_format: The format of the year to be passed. Defaults to `%Y`
## Required Attributes
* day
* month
* year
* date_field: The field that holds the date that will drive every else. We saw this in [ArchiveIndexView](/cbv-archiveindexview)
Additionally you also need `model` or `queryset`
The `day`, `month`, and `year` can be passed via `urls.py` so that they don’t
need to be specified in the view itself.
## Example:
views.py
class myDayArchiveView(DayArchiveView):
    month_format = '%m'
    date_field = 'post_date'
    queryset = Person.objects.all()
    context_object_name = 'person'
    paginate_by = 10
    page_kwarg = 'name'
urls.py
path('day_archive_view/<int:year>/<int:month>/<int:day>/', views.myDayArchiveView.as_view(), name='day_archive_view'),
<model>_archive_day.html
{% extends 'base.html' %}
{% block content %}
{% for p in person %}
{{ p }}
{% endfor %}
{% endblock %}
## Diagram
A visual representation of how `DayArchiveView` is derived can be seen here:
[Diagram: DayArchiveView class hierarchy]
## Conclusion
If you have date-based content, this is a great tool to use and again super
easy to implement.
There are other time-based CBVs for Today, Date, Week, Month, and Year. They
all do the same thing (generally) so I won’t review those.
",2019-11-27,cbv-dayarchiveview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/DayArchiveView/)
`DayArchiveView`
> > List of objects published on a given day.
## Attributes
There are six new attributes to review here … well really 3 new ones and then
a formatting attribute for each of these 3:
* day: The day to be viewed
* day_format: The format of the day …
",CBV - DayArchiveView,https://www.ryancheley.com/2019/11/27/cbv-dayarchiveview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/DeleteView/)
`DeleteView`
> > View for deleting an object retrieved with self.get_object(), with a
response rendered by a template.
## Attributes
There are no new attributes, but 2 that we’ve seen are required: (1)
`queryset` or `model`; and (2) `success_url`
## Example
views.py
class myDeleteView(DeleteView):
    queryset = Person.objects.all()
    success_url = reverse_lazy('rango:list_view')
urls.py
path('delete_view/<pk>', views.myDeleteView.as_view(), name='delete_view'),
<template_name>.html
Below is just the form that would be needed to get the delete to work.
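A minimal sketch of it (assuming the `o.id` convention from the ListView templates and the `urls.py` above):
<form method=""post"" action=""{% url 'rango:delete_view' o.id %}"">
{% csrf_token %}
<input type=""submit"" value=""Delete"">
</form>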
## Diagram
A visual representation of how `DeleteView` is derived can be seen here:
[Diagram: DeleteView class hierarchy]
## Conclusion
As far as implementations go, the ability to add a form to delete data is about
the easiest thing you can do in Django. It requires next to nothing in terms
of implementing. We now have step 4 of a CRUD app!
",2019-12-11,cbv-deleteview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/DeleteView/)
`DeleteView`
> > View for deleting an object retrieved with self.get*object(), with a *
response rendered by a template.
## Attributes
There are no new attributes, but 2 that we’ve seen are required: (1)
`queryset` or `model`; and (2) `success_url`
## Example
views.py
class myDeleteView(DeleteView …
",CBV - DeleteView,https://www.ryancheley.com/2019/12/11/cbv-deleteview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.detail/DetailView/)
`DetailView`
> > Render a ""detail"" view of an object.
>>
>> By default this is a model instance looked up from `self.queryset`, but the
view will support display of _any_ object by overriding `self.get_object()`.
There are 7 attributes for the `DetailView` that are derived from the
`SingleObjectMixin`. I’ll talk about five of them and then go over the ‘slug’
fields in their own section.
* context_object_name: similar to the `ListView` it allows you to give a more memorable name to the object in the template. You’ll want to use this if you want to have future developers (i.e. you) not hate you
* model: similar to the `ListView` except it only returns a single record instead of all records for the model based on a filter parameter passed via the `slug`
* pk_url_kwarg: you can set this to be something other than pk if you want … though I’m not sure why you’d want to
* query_pk_and_slug: The Django Docs have a pretty clear explanation of what it does
> > This attribute can help mitigate [insecure direct object
> reference](https://www.owasp.org/index.php/Top_10_2013-A4-Insecure_Direct_Object_References)
> attacks. When applications allow access to individual objects by a
> sequential primary key, an attacker could brute-force guess all URLs;
> thereby obtaining a list of all objects in the application. If users with
> access to individual objects should be prevented from obtaining this list,
> setting `query_pk_and_slug` to True will help prevent the guessing of URLs
> as each URL will require two correct, non-sequential arguments. Simply using
> a unique slug may serve the same purpose, but this scheme allows you to have
> non-unique slugs.
* queryset: used to return data to the view. It will supersede the value supplied for `model` if both are present
## The Slug Fields
There are two attributes that I want to talk about separately from the others:
* slug_field
* slug_url_kwarg
If neither `slug_field` nor `slug_url_kwarg` are set then the url must contain
`<pk>`. The url in the template needs to include `o.id`
### views.py
There is nothing to show in the `views.py` file in this example
### urls.py
path('detail_view/<pk>', views.myDetailView.as_view(), name='detail_view'),
### <template_name>.html
{% url 'rango:detail_view' o.id %}
If `slug_field` is set but `slug_url_kwarg` is NOT set then the url can be
`<slug>`. The url in the template needs to include `o.<slug_field>`
### views.py
class myDetailView(DetailView):
    slug_field = 'first_name'
### urls.py
path('detail_view/<slug>/', views.myDetailView.as_view(), name='detail_view'),
### <template_name>.html
{% url 'rango:detail_view' o.first_name %}
If `slug_field` is not set but `slug_url_kwarg` is set then you get an error.
Don’t do this one.
If both `slug_field` and `slug_url_kwarg` are set then the url must be
`<value>` where value is what the parameters are set to. The url in the
template needs to include `o.<value>`
### views.py
class myDetailView(DetailView):
    slug_field = 'first_name'
    slug_url_kwarg = 'first_name'
### urls.py
path('detail_view/<first_name>/', views.myDetailView.as_view(), name='detail_view'),
### <template_name>.html
{% url 'rango:detail_view' o.first_name %}
## Diagram
A visual representation of how `DetailView` is derived can be seen here:
[Diagram: DetailView class hierarchy]
## Conclusion
I think the most important part of the `DetailView` is to remember its
relationship to `ListView`. Changes you try to implement on the Class for
`DetailView` need to be incorporated into the template associated with the
`ListView` you have.
",2019-11-24,cbv-detailview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.detail/DetailView/)
`DetailView`
> > Render a ""detail"" view of an object.
>>
>> By default this is a model instance looked up from `self.queryset`, but the
view will support display of _any_ object by overriding `self.get_object()`.
There are 7 attributes for the `DetailView` that are derived from the …
",CBV - DetailView,https://www.ryancheley.com/2019/11/24/cbv-detailview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/FormView/)
`FormView`
> > A view for displaying a form and rendering a template response.
## Attributes
The only new attribute to review this time is `form_class`. That being said,
there are a few implementation details to cover:
* form_class: takes a Form class and is used to render the form on the `html` template later on.
## Methods
Up to this point we haven’t really needed to override a method to get any of
the views to work. This time though, we need some way for the view to verify
that the data is valid and then save it somewhere.
* form_valid: used to verify that the data entered is valid and then saves to the database. Without this method your form doesn’t do anything
## Example
This example is a bit more than previous examples. A new file called
`forms.py` is used to define the form that will be used.
forms.py
from django.forms import ModelForm
from rango.models import Person

class PersonForm(ModelForm):
    class Meta:
        model = Person
        exclude = [
            'post_date',
        ]
views.py
class myFormView(FormView):
    form_class = PersonForm
    template_name = 'rango/person_form.html'
    extra_context = {
        'type': 'Form'
    }
    success_url = reverse_lazy('rango:list_view')

    def form_valid(self, form):
        person = Person.objects.create(
            first_name=form.cleaned_data['first_name'],
            last_name=form.cleaned_data['last_name'],
            post_date=datetime.now(),
        )
        return super(myFormView, self).form_valid(form)
urls.py
path('form_view/', views.myFormView.as_view(), name='form_view'),
<template_name>.html
{{ type }} View
{% if type != 'Update' %}
{{ form }}
{% endif %}
## Diagram
A visual representation of how `FormView` is derived can be seen here:
[Diagram: FormView class hierarchy]
## Conclusion
I really struggled with understanding _why_ you would want to implement
`FormView`. I found this explanation on
[Agiliq](https://www.agiliq.com/blog/2019/01/django-formview/) and it helped
me grok the why:
> > FormView should be used when you need a form on the page and want to
> perform certain action when a valid form is submitted. eg: Having a contact
> us form and sending an email on form submission.
>>
>> CreateView would probably be a better choice if you want to insert a model
instance in database on form submission.
While my example above works, it’s not the intended use of `FormView`. Really,
it’s just an implementation of `CreateView` using `FormView`.
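To make that distinction concrete, here’s a sketch of a `FormView` used as the quote intends, where `ContactForm` is a hypothetical form with `email` and `message` fields:
from django.core.mail import send_mail
from django.urls import reverse_lazy
from django.views.generic.edit import FormView

class ContactFormView(FormView):
    form_class = ContactForm  # hypothetical contact form
    template_name = 'rango/contact.html'
    success_url = reverse_lazy('rango:list_view')

    def form_valid(self, form):
        # the action on a valid form: send an email instead of saving a model
        send_mail(
            subject='Contact form submission',
            message=form.cleaned_data['message'],
            from_email=form.cleaned_data['email'],
            recipient_list=['you@example.com'],
        )
        return super().form_valid(form)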
",2019-12-04,cbv-formview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/FormView/)
`FormView`
> > A view for displaying a form and rendering a template response.
## Attributes
The only new attribute to review this time is `form_class`. That being said,
there are a few implementation details to cover
* form_class: takes a Form class and is used to render the …
",CBV - FormView,https://www.ryancheley.com/2019/12/04/cbv-formview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/ListView/)
`ListView`:
> > Render some list of objects, set by `self.model` or `self.queryset`.
>>
>> `self.queryset` can actually be any iterable of items, not just a queryset.
There are 16 attributes for the `ListView` but only 2 types are required to
make the page return something other than a
[500](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#5xx_Server_errors)
error:
* Data
* Template Name
## Data Attributes
You have a choice of either using `Model` or `queryset` to specify **what**
data to return. Without it you get an error.
The `Model` attribute gives you less control but is easier to implement. If
you want to see ALL of the records of your model, just set
model = ModelName
However, if you want to have a bit more control over what is going to be
displayed you’ll want to use `queryset` which will allow you to add methods to
the specified model, i.e. `filter`, `order_by`.
queryset = ModelName.objects.filter(field_name='filter')
If you specify both `model` and `queryset` then `queryset` takes precedence.
## Template Name Attributes
You have a choice of using `template_name` or `template_name_suffix`. The
`template_name` allows you to directly control what template will be used. For
example, if you have a template called `list_view.html` you can specify it
directly in `template_name`.
`template_name_suffix` will calculate what the template name should be by
using the app name, model name, and appending the value set to the
`template_name_suffix`.
In pseudo code:
templates/<app_name>/<model_name><template_name_suffix>.html
For an app named `rango` and a model named `person` setting
`template_name_suffix` to `_test` would resolve to
templates/rango/person_test.html
## Other Attributes
If you want to return something interesting you’ll also need to specify
* allow_empty: The default for this is `True`, which allows the page to render if there are no records. If you set this to `False` then returning no records will result in a 404 error
* context_object_name: allows you to give a more memorable name to the object in the template. You’ll want to use this if you want to have future developers (i.e. you) not hate you
* ordering: allows you to specify the order that the data will be returned in. The field specified must exist in the `model` or `queryset` that you’ve used
* page_kwarg: this indicates the query-string name to use when going from page x to y; it defaults to `page`, but overriding it to something more sensible can be helpful for SEO. For example you can use `name` instead of `page` if you’ve got a page that lists a bunch of names

* paginate_by: determines the maximum number of records to return on any page.
* paginate_orphans: number of items to add to the last page; this helps keep pages with singletons (or some other small number of records) from being created
* paginator_class: class that defines several of the attributes above. Don’t mess with this unless you have an actual reason to do so. Also … you’re not a special snowflake, there are literal dragons down this road. Go back!
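Putting a few of these attributes together, a minimal sketch of a `ListView` might look like this (it assumes the `rango` app and `Person` model used elsewhere in this series, and that `Person` has a `last_name` field):
from django.views.generic import ListView
from rango.models import Person

class myListView(ListView):
    queryset = Person.objects.all()
    template_name = 'rango/list_view.html'
    context_object_name = 'people'  # the template loops over 'people'
    ordering = ['last_name']
    paginate_by = 10
    paginate_orphans = 2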
## Diagram
A visual representation of how `ListView` is derived can be seen here:

## Conclusion
The `ListView` CBV is a powerful and highly customizable tool that allows you
to display the data from a single model quite easily.
",2019-11-17,cbv-listview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/ListView/)
`ListView`:
> > Render some list of objects, set by `self.model` or `self.queryset`.
>>
>> `self.queryset` can actually be any iterable of items, not just a queryset.
There are 16 attributes for the `ListView` but only 2 types are required to
make the page return something …
",CBV - ListView,https://www.ryancheley.com/2019/11/17/cbv-listview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/LoginView/)
`LoginView`
> > Display the login form and handle the login action.
## Attributes
* authentication_form: Allows you to subclass `AuthenticationForm` if needed. You would want to do this IF you need other fields besides username and password for login OR you want to implement other logic beyond just account creation, i.e. account verification must be done as well. See this [example](https://simpleisbetterthancomplex.com/tips/2016/08/12/django-tip-10-authentication-form-custom-login-policy.html) by Vitor Freitas for more details
* form_class: The form that will be used by the template created. Defaults to Django’s `AuthenticationForm`
* redirect_authenticated_user: If the user is logged in then when they attempt to go to your login page it will redirect them to the `LOGIN_REDIRECT_URL` configured in your `settings.py`
* redirect_field_name: similar idea to updating what the `next` field will be from the `DetailView`. If this is specified then you’ll most likely need to create a custom login template.
* template_name: The default value for this is `registration/login.html`, i.e. a file called `login.html` in the `registration` directory of the `templates` directory.
There are no required attributes for this view, which is nice because you can
just add `pass` to the view and you’re set (for the view, anyway; you still
need an HTML template).
You’ll also need to update `settings.py` to include a value for the
`LOGIN_REDIRECT_URL`.
### Note on redirect_field_name
Per the [Django
Documentation](https://docs.djangoproject.com/en/2.2/topics/auth/default/#django.contrib.auth.decorators.login_required):
> > If the user isn’t logged in, redirect to settings.LOGIN_URL, passing the
> current absolute path in the query string. Example:
> /accounts/login/?next=/polls/3/.
If `redirect_field_name` is set then that name is used in place of `next`, and
the URL would be:
/accounts/login/?<redirect_field_name>=/polls/3/
Basically, you only use this if you have a pretty good reason.
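If you do need to customize, a minimal sketch using a couple of the attributes above might look like this:
from django.contrib.auth.views import LoginView

class myCustomLoginView(LoginView):
    template_name = 'registration/login.html'
    # already-authenticated users skip the form and go to LOGIN_REDIRECT_URL
    redirect_authenticated_user = True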
## Example
views.py
class myLoginView(LoginView):
    pass
urls.py
path('login_view/', views.myLoginView.as_view(), name='login_view'),
registration/login.html
{% extends ""base.html"" %}
{% load i18n %}
{% block content %}
    <form method='post'>{% csrf_token %}
        {{ form.as_p }}
        <button type='submit'>{% trans 'Log in' %}</button>
    </form>
{% endblock %}
settings.py
LOGIN_REDIRECT_URL = '//'
## Diagram
A visual representation of how `LoginView` is derived can be seen here:

## Conclusion
Really easy to implement right out of the box but allows some nice
customization. That being said, make those customizations IF you need to, not
just because you think you want to.
",2019-12-15,cbv-loginview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/LoginView/)
`LoginView`
> > Display the login form and handle the login action.
## Attributes
* authentication_form: Allows you to subclass `AuthenticationForm` if needed. You would want to do this IF you need other fields besides username and password for login OR you want to implement other logic than just …
",CBV - LoginView,https://www.ryancheley.com/2019/12/15/cbv-loginview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/LogoutView/)
`LogoutView`
> > Log out the user and display the 'You are logged out' message.
## Attributes
* next_page: redirects the user on logout.
* [redirect_field_name](https://docs.djangoproject.com/en/2.2/topics/auth/default/#django.contrib.auth.views.LogoutView): The name of a GET field containing the URL to redirect to after log out. Defaults to next. Overrides the next_page URL if the given GET parameter is passed. 1
* template_name: defaults to `registration/logged_out.html`. Even if you don’t have a template the view does get rendered but it uses the default Django skin. You’ll want to create your own to allow the user to logout AND to keep the look and feel of the site.
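As a sketch, using `next_page` to send the user somewhere specific after logout might look like this (the URL name follows the examples in this series):
from django.contrib.auth.views import LogoutView

class myLogoutWithRedirectView(LogoutView):
    # accepts a URL or a URL pattern name
    next_page = 'rango:login_view'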
## Example
views.py
class myLogoutView(LogoutView):
    pass
urls.py
path('logout_view/', views.myLogoutView.as_view(), name='logout_view'),
registration/logged_out.html
{% extends ""base.html"" %}
{% load i18n %}
{% block content %}
{% trans ""Logged out"" %}
{% endblock %}
## Diagram
A visual representation of how `LogoutView` is derived can be seen here:
Image Link from CCBV YUML goes here
## Conclusion
I’m not sure how it could be much easier to implement a logout page.
1. Per Django Docs ↩︎
",2019-12-15,cbv-logoutview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/LogoutView/)
`LogoutView`
> > Log out the user and display the 'You are logged out' message.
## Attributes
* next_page: redirects the user on logout.
* [redirect_field_name](https://docs.djangoproject.com/en/2.2/topics/auth/default/#django.contrib.auth.views.LogoutView): The name of a GET field containing the URL to redirect to after log out. Defaults to next. Overrides the next_page URL if the …
",CBV - LogoutView,https://www.ryancheley.com/2019/12/15/cbv-logoutview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/PasswordChangeDoneView/)
`PasswordChangeDoneView`
> > Render a template. Pass keyword arguments from the URLconf to the context.
## Attributes
* template_name: Much like the `LogoutView` the default view is the Django skin. Create your own `password_change_done.html` file to keep the user experience consistent across the site.
* title: the default uses the function `gettext_lazy()` and passes the string ‘Password change successful’. The function `gettext_lazy()` will translate the text into the local language if a translation is available. I’d just keep the default on this.
## Example
views.py
class myPasswordChangeDoneView(PasswordChangeDoneView):
    pass
urls.py
path('password_change_done_view/', views.myPasswordChangeDoneView.as_view(), name='password_change_done_view'),
password_change_done.html
{% extends ""base.html"" %}
{% load i18n %}
{% block content %}
{% block title %}
{{ title }}
{% endblock %}
{% trans ""Password changed"" %}
{% endblock %}
settings.py
LOGIN_URL = '//login_view/'
The above assumes that you have this set up in your `urls.py`.
## Special Notes
You need to set the `LOGIN_URL` value in your `settings.py`. It defaults to
`/accounts/login/`. If that path isn’t valid you’ll get a 404 error.
## Diagram
A visual representation of how `PasswordChangeDoneView` is derived can be seen
here:

## Conclusion
Again, not much to do here. Let Django do all of the heavy lifting, but be
mindful of the needed work in `settings.py` and the new template you’ll
need/want to create.
",2019-12-25,cbv-passwordchangedoneview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/PasswordChangeDoneView/)
`PasswordChangeDoneView`
> > Render a template. Pass keyword arguments from the URLconf to the context.
## Attributes
* template_name: Much like the `LogoutView` the default view is the Django skin. Create your own `password_change_done.html` file to keep the user experience consistent across the site.
* title: the default uses …
",CBV - PasswordChangeDoneView,https://www.ryancheley.com/2019/12/25/cbv-passwordchangedoneview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/PasswordChangeView/)
`PasswordChangeView`
> > A view for displaying a form and rendering a template response.
## Attributes
* form_class: The form that will be used by the template created. Defaults to Django’s `PasswordChangeForm`
* success_url: If you’ve created your own custom PasswordChangeDoneView then you’ll need to update this. The default points to Django’s `password_change_done` URL name, so unless your top level `urls.py` has a URL named `password_change_done` you’ll get an error.
* title: defaults to ‘Password Change’ and is translated into local language
## Example
views.py
class myPasswordChangeView(PasswordChangeView):
    success_url = reverse_lazy('rango:password_change_done_view')
urls.py
path('password_change_view/', views.myPasswordChangeView.as_view(), name='password_change_view'),
password_change_form.html
{% extends ""base.html"" %}
{% load i18n %}
{% block content %}
{% block title %}
{{ title }}
{% endblock %}
{% trans ""Password changed"" %}
{% endblock %}
## Diagram
A visual representation of how `PasswordChangeView` is derived can be seen
here:

## Conclusion
The only thing to keep in mind here is the `success_url`, which will most
likely need to be set based on the application you’ve written. If you get an
error about `reverse` not being able to find `password_change_done`, that’s the issue.
",2019-12-22,cbv-passwordchangeview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/PasswordChangeView/)
`PasswordChangeView`
> > A view for displaying a form and rendering a template response.
## Attributes
* form_class: The form that will be used by the template created. Defaults to Django’s `PasswordChangeForm`
* success_url: If you’ve created your own custom PasswordChangeDoneView then you’ll need to update this …
",CBV - PasswordChangeView,https://www.ryancheley.com/2019/12/22/cbv-passwordchangeview/
ryan,technology,"From [Classy Class Based
View](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.base/RedirectView/)
the `RedirectView` will
> > Provide a redirect on any GET request.
It is an extension of `View` and has 5 attributes:
* http_method_names (from `View`)
* pattern_name: The name of the URL pattern to redirect to. 1 This will be used if no `url` is used.
* permanent: a flag to determine if the redirect is permanent or not. If set to `True`, then the [HTTP Status Code](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#3xx_Redirection) [301](https://en.wikipedia.org/wiki/HTTP_301) is returned. If set to `False` the [302](https://en.wikipedia.org/wiki/HTTP_302) is returned
* query_string: If `True` then it will pass along the query string from the RedirectView. If it’s `False` it won’t. If this is set to `True` and neither `pattern_name` nor `url` are set then nothing will be passed to the `RedirectView`
* url: Where the Redirect should point. It will take precedence over `pattern_name`, so you should set only `url` or `pattern_name`, but not both. This will need to be an absolute url, not a relative one, otherwise you may get a [404](https://en.wikipedia.org/wiki/HTTP_404) error
The example below will give a `301` status code:
class myRedirectView(RedirectView):
    pattern_name = 'rango:template_view'
    permanent = True
    query_string = True
While this would be a `302` status code:
class myRedirectView(RedirectView):
    pattern_name = 'rango:template_view'
    permanent = False
    query_string = True
## Methods
The method `get_redirect_url` allows you to perform actions when the
redirect is called. From the [Django
Docs](https://docs.djangoproject.com/en/2.2/ref/class-based-
views/base/#redirectview) the example given is increasing a counter on an
Article Read value.
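A minimal sketch of that pattern might look like this (the `Article` model, its `read_count` field, and the URL pattern are hypothetical, following the docs’ example):
from django.shortcuts import get_object_or_404
from django.views.generic.base import RedirectView
from rango.models import Article  # hypothetical model

class ArticleCounterRedirectView(RedirectView):
    permanent = False
    pattern_name = 'rango:article_detail'  # hypothetical URL pattern

    def get_redirect_url(self, *args, **kwargs):
        # bump the counter, then let the parent class build the URL
        article = get_object_or_404(Article, pk=kwargs['pk'])
        article.read_count += 1
        article.save()
        return super().get_redirect_url(*args, **kwargs)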
## Diagram
A visual representation of how `RedirectView` derives from `View` 2

## Conclusion
In general, given the power of the url mapping in Django I’m not sure why you
would need to use the RedirectView. From [Real
Python](https://docs.djangoproject.com/en/2.2/ref/class-based-
views/base/#redirectview) they concur, stating:
> > As you can see, the class-based approach does not provide any obvious
> benefit while adding some hidden complexity. That raises the question:
> **when should you use RedirectView?**
>>
>> If you want to add a redirect directly in your urls.py, using RedirectView
> makes sense. But if you find yourself overwriting `get_redirect_url`, a
> function-based view might be easier to understand and more flexible for future
> enhancements.
1. From the [Django Docs](https://docs.djangoproject.com/en/2.2/ref/class-based-views/base/) ↩︎
2. Original Source from Classy Class Based Views ↩︎
",2019-11-10,cbv-redirectview,"From [Classy Class Based
View](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.base/RedirectView/)
the `RedirectView` will
> > Provide a redirect on any GET request.
It is an extension of `View` and has 5 attributes:
* http_method_names (from `View`)
* pattern_name: The name of the URL pattern to redirect to. 1 This will be used if no `url` is used.
* permanent: a …
",CBV - RedirectView,https://www.ryancheley.com/2019/11/10/cbv-redirectview/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.base/TemplateView/)
the `TemplateView` will
> > Render a template. Pass keyword arguments from the URLconf to the context.
It is an extended version of the `View` CBV with the `ContextMixin` and
the `TemplateResponseMixin` added to it.
It has several attributes that can be set
* content_type: will allow you to define the MIME type that the page will return. The default is `DEFAULT_CONTENT_TYPE` but can be overridden with this attribute.
* extra_context: can be passed as a keyword argument to `as_view()` or set as a class attribute; either way it is added to the template context.
* http_method_names: derived from `View` and has the same definition
* response_class: The response class to be returned by the `render_to_response` method; it defaults to a `TemplateResponse`. See below for further discussion
* template_engine: can be used to specify which template engine to use IF you have configured the use of multiple template engines in your `settings.py` file. See the [Usage](https://docs.djangoproject.com/en/2.2/topics/templates/#usage) section of the Django Documentation on Templates
* template_name: this attribute is required IF the method `get_template_names()` is not used.
## More on `response_class`
This confuses the ever living crap out of me. The best (only) explanation I
have found is by GitHub user `spapas` in his article [Django non-HTML
responses](https://spapas.github.io/2014/09/15/django-non-html-
responses/#rendering-to-non-html):
> > From the previous discussion we can conclude that if your non-HTML
> response needs a template then you just need to create a subclass of
> TemplateResponse and assign it to the response_class attribute (and also
> change the content_type attribute). On the other hand, if your non-HTML
> response does not need a template to be rendered then you have to override
> render_to_response completely (since the template parameter does not need
> to be passed now) and either define a subclass of HttpResponse or do the
> rendering in the render_to_response.
Basically, if you ever want to return a non-HTML response you’d set this
attribute, but it seems available mostly as a ‘just-in-case’ and not something
that’s used every day.
My advice … just leave it as is.
## When to use the `get` method
An answer which makes sense to me that I found on
[StackOverflow](https://stackoverflow.com/questions/35824904/django-view-get-
context-data-vs-get) was (slightly modified to make it more understandable)
> > if you need to have data available every time, use get_context_data(). If
> you need the data only for a specific request method (eg. in get), then put
> it in get.
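In code, that distinction looks something like this (the template name and context key are hypothetical):
from django.views.generic import TemplateView

class myTemplateView(TemplateView):
    template_name = 'rango/example.html'

    def get_context_data(self, **kwargs):
        # data you need every time the template renders goes here
        context = super().get_context_data(**kwargs)
        context['always_available'] = 'available whenever the template renders'
        return context

    def get(self, request, *args, **kwargs):
        # work you only need when handling a GET request goes here
        return super().get(request, *args, **kwargs)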
## When to use the `get_template_names` method
This method allows you to easily change a template being used based on values
passed through GET.
This can be helpful if you want to have one template for a super user and
another template for a basic user. This helps to keep business logic out of
the template and in the view where it belongs.
This can also be useful if you want to specify several possible templates to
use. A list is passed and Django will work through that list from the first
element to the last until it finds a template that exists and render it.
If you don’t specify template_name you have to use this method.
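As a sketch, choosing a template based on the user might look like this (the template names are hypothetical):
from django.views.generic import TemplateView

class myUserAwareTemplateView(TemplateView):
    def get_template_names(self):
        # returns a list; Django renders the first template that exists
        if self.request.user.is_superuser:
            return ['rango/super_user.html']
        return ['rango/basic_user.html']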
## When to use the `get_context_data` method
See above in the section When to use the `get` method
## Diagram
A visual representation of how `TemplateView` derives from `View` 1

## Conclusion
If you want to roll your own CBV because you have a super specific use case,
`TemplateView` is going to be a good place to start. However,
you may find that there is already a view that is going to do what you need it
to. Writing your own custom implementation of `TemplateView` may be a waste of
time **IF** you haven’t already verified that what you need isn’t already
there.
1. Original Source from Classy Class Based Views ↩︎
",2019-11-03,cbv-template-view,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.base/TemplateView/)
the `TemplateView` will
> > Render a template. Pass keyword arguments from the URLconf to the context.
It is an extended version of the `View` CBV with the the `ContextMixin` and
the `TemplateResponseMixin` added to it.
It has several attributes that can be set
* content_type: will allow …
",CBV - Template View,https://www.ryancheley.com/2019/11/03/cbv-template-view/
ryan,technology,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/UpdateView/)
`UpdateView`
> > View for updating an object, with a response rendered by a template.
## Attributes
Two attributes are required to get the template to render. We’ve seen
`queryset` before and in [CreateView](/cbv-createview/) we saw `fields`. As a
brief refresher
* fields: specifies what fields from the model or queryset will be displayed on the rendered template. You can set `fields` to `__all__` if you want to return all of the fields
* success_url: where to redirect after the record has been updated, so that you know the update was made.
## Example
views.py
class myUpdateView(UpdateView):
    queryset = Person.objects.all()
    fields = '__all__'
    extra_context = {
        'type': 'Update'
    }
    success_url = reverse_lazy('rango:list_view')
urls.py
path('update_view/', views.myUpdateView.as_view(), name='update_view'),
rango/person_form.html
{% block content %}
{{ type }} View
{% if type == 'Create' %}
{% endif %}
{% endblock %}
## Diagram
A visual representation of how `UpdateView` is derived can be seen here:

## Conclusion
A simple way to implement a form to update data in a model. Step 3 for a CR
**U** D app is now complete!
",2019-12-08,cbv-updateview,"From [Classy Class Based
Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/UpdateView/)
`UpdateView`
> > View for updating an object, with a response rendered by a template.
## Attributes
Two attributes are required to get the template to render. We’ve seen
`queryset` before and in [CreateView](/cbv-createview/) we saw `fields`. As a
brief refresher
* fields: specifies what fields from the …
",CBV - UpdateView,https://www.ryancheley.com/2019/12/08/cbv-updateview/
ryan,technology,"`View` is the ancestor of ALL Django CBV. From the great site [Classy Class
Based Views](http://ccbv.co.uk), they are described as
> > Intentionally simple parent class for all views. Only implements dispatch-
> by-method and simple sanity checking.
This is no joke. The `View` class has almost nothing to it, but it’s a solid
foundation for everything else that will be done.
Its implementation has just one attribute `http_method_names` which is a list
that allows you to specify what http verbs are allowed.
Other than that, there’s really not much to it. You just write a simple
method, something like this:
def get(self, _):
    return HttpResponse('My Content')
All that gets returned to the page is simple HTML. You can return JSON or
plain text instead by defining the `content_type`, like this:
def get(self, _):
    return HttpResponse('My Content', content_type='text/plain')
You can also make the text that is displayed be based on a variable defined in
the class.
First, you need to define the variable
content = ('This is a {View} template and is not used for much of anything but '
           'allowing extensions of it for other Views')
And then you can do something like this:
def get(self, _):
    return HttpResponse(self.content, content_type='text/plain')
Also, as mentioned above you can specify the allowable methods via the
attribute `http_method_names`.
The following HTTP methods are allowed:
* get
* post
* put
* patch
* delete
* head
* options
* trace
By default all are allowed.
If we put all of the pieces together we can see that a really simple `View`
CBV would look something like this:
from django.http import HttpResponse
from django.views import View

class myView(View):
    content = ('This is a {View} template and is not used for much of anything but '
               'allowing extensions of it for other Views')
    http_method_names = ['get']

    def get(self, _):
        return HttpResponse(self.content, content_type='text/plain')
This `View` will return `content` to the page rendered as plain text. This CBV
is also limited to only allowing `get` requests.
Here’s what it looks like in the browser:

## Conclusion
`View` doesn’t do much, but it’s the base for everything else, so
understanding it is going to be important.
",2019-10-27,cbv-view,"`View` is the ancestor of ALL Django CBV. From the great site [Classy Class
Based Views](http://ccbv.co.uk), they are described as
> > Intentionally simple parent class for all views. Only implements dispatch-
> by-method and simple sanity checking.
This is no joke. The `View` class has almost nothing to it, but it’s a …
",CBV - View,https://www.ryancheley.com/2019/10/27/cbv-view/
ryan,technology,"As I’ve written about [previously](/my-first-project-after-completing-
the-100-days-of-web-in-python.html) I’m working on a Django app. It’s in a
pretty good spot (you should totally check it out over at
[StadiaTracker.com](https://www.stadiatracker.com)) and I thought now would be
a good time to learn a bit more about some of the ways that I’m rendering the
pages.
I’m using Class Based Views (CBV) and I realized that I really didn’t
[grok](https://en.wikipedia.org/wiki/Grok) how they worked. I wanted to change
that.
I’ll be working on a series where I deep dive into the CBV and work them from
several angles and try to get them to do all of the things that they are
capable of.
The first place I’d suggest anyone start to get a good idea of CBV, and the
idea of Mixins would be [SpaPas’ GitHub
Page](https://spapas.github.io/2018/03/19/comprehensive-django-cbv-guide/)
where he does a really good job of covering many pieces of the CBV. It’s a
great resource!
This is just the intro to this series and my hope is that I’ll publish one of
these pieces each week for the next several months as I work my way through
all of the various CBV that are available.
",2019-10-27,class-based-views,"As I’ve written about [previously](/my-first-project-after-completing-
the-100-days-of-web-in-python.html) I’m working on a Django app. It’s in a
pretty good spot (you should totally check it out over at
[StadiaTracker.com](https://www.stadiatracker.com)) and I thought now would be
a good time to learn a bit more about some of the ways that …
",Class Based Views,https://www.ryancheley.com/2019/10/27/class-based-views/
Ryan Cheley,pages,"This site is made using [Pelican](https://getpelican.com/) which is a
[Python](https://www.python.org/) [Static Site
Generator](https://en.wikipedia.org/wiki/Static_site_generator)
I use [Digital Ocean](https://www.digitalocean.com/) to host the site.
I have a
[Makefile](https://raw.githubusercontent.com/ryancheley/ryancheley.com/main/Makefile)
file that allows me to generate new posts. It also allows me to publish the
post.
I like [just](https://github.com/casey/just) more though as a command runner,
so there is also a
[justfile](https://raw.githubusercontent.com/ryancheley/ryancheley.com/main/justfile)
that is just a wrapper for the Makefile
I use a [SQLite database](https://sqlite.org/) hosted at
[Vercel](https://vercel.com/) which runs [datasette](https://datasette.io/) to
serve up search results. I wrote a custom Pelican Plugin called [pelican-to-
sqlite](https://pypi.org/project/pelican-to-sqlite/) to generate the content
of the entry for the SQLite database.
",2025-04-02,colophon,"This site is made using [Pelican](https://getpelican.com/) which is a
[Python](https://www.python.org/) [Static Site
Generator](https://en.wikipedia.org/wiki/Static_site_generator)
I use [Digital Ocean](https://www.digitalocean.com/) to host the site.
I have a
[Makefile](https://raw.githubusercontent.com/ryancheley/ryancheley.com/main/Makefile)
file that allows me to generate new posts. It also allows me to publish the
post.
I like [just](https://github.com/casey/just) more though as a command runner …
",Colophon,https://www.ryancheley.com/pages/colophon/
ryan,musings,"I've been thinking about communication ... a lot. How well people communicate
(or don't communicate) is what drives nearly every problem, either at work or
at home. Communication is essential to a feeling of **team** which can help to
avoid communication problems in the first place. Once you feel like you are on
a team, I think it's easier to engage in communication because you feel more
comfortable asking questions, posing challenges when needed, and generally
being happier with your surroundings.
I'm almost finished with [Atul
Gawande's](https://en.wikipedia.org/wiki/Atul_Gawande) book [The Checklist
Manifesto](https://en.m.wikipedia.org/wiki/The_Checklist_Manifesto) and what
struck me the most about it was the fact that checklists used by pilots,
construction crews, and surgeons all had one thing in common. They **forced**
communication amongst disparate people helping to start the formation of bonds
that lead to a team.
Whether constructing a 32 floor high rise building, flying a 747 or performing
open heart surgery, these are all complex problems and they all have
checklists.
The use of these checklists helps the practitioners focus on what's important
by using the checklist to remind them of what needs to be done but is easily
forgotten.
All of this is interesting, but you may be at a 'so what' or 'and ...' point.
While reading [Data silos holding back healthcare breakthroughs,
outcomes](http://www.healthdatamanagement.com/news/data-silos-holding-back-
healthcare-breakthroughs-outcomes?brief=00000152-14ad-d1cc-a5fa-7cff19540000)
this line caught my attention:
> > However, the MIT researchers contend that the health data divide can be
> narrowed by creating a culture of collaboration between clinicians and data
> scientists
Here's the 'so what' point of all of this. Using **Big Data** to help patients
should be what the healthcare industry is focusing on. But this is difficult
because Clinicians and Data Scientists don't always have the vocabulary nor
the incentives to collaborate in a meaningful way that leads to improved
patient outcomes.
Could checklists for implementing **Big Data** at various types and sizes of
organizations help? I think so, because at the very least, it could start the
necessary conversations needed to engender a sense of **team** between
Clinicians and Data Scientists which can be sorely lacking in many
institutions.
",2017-01-14,communication-and-checklists,"I've been thinking about communication ... a lot. How well people communicate
(or don't communicate) is what drives nearly every problem, either at work or
at home. Communication is essential to a feeling of **team** which can help to
avoid communication problems in the first place. Once you feel like you …
",Communication and Checklists,https://www.ryancheley.com/2017/01/14/communication-and-checklists/
ryan,microblog,"Work has been a bit hectic recently which has really cut into some of my open
source(ish) community participation, at least the ""in person"" ones. I've not
been able to attend a DSF Office hour, or had a chance to do my writing
session, or go to Jeff's Office Hours for a few weeks.
Today was looking like I would miss Jeff's Office Hours again, but I realized
that if I could go, even for 30 minutes, I should.
I didn't realize beforehand how worth it the experience would be. I was only
there for about 30 minutes, but it was such a great experience to see some
people I hadn't seen in some while, and to talk a bit about hockey and Python
and just generally listen to my friend banter about various things.
These types of community are so necessary and so rejuvenating for me. I need
to remember this. Work will be hectic for the foreseeable future ... as with
everything, there's too much to do, and not enough time to do it in.
I will most likely forget this again, until I remember it, but hopefully I can
work hard to stay engaged in the ways that are helpful and needed for me.
",2025-02-21,community,"Work has been a bit hectic recently which has really cut into some of my open
source(ish) community participation, at least the ""in person"" ones. I've not
been able to attend a DSF Office hour, or had a chance to do my writing
session, or go to Jeff's Office …
",Community,https://www.ryancheley.com/2025/02/21/community/
ryan,technology,"I went to [DjangoCon US](https://2022.djangocon.us) a few weeks ago and [hung
around for the
sprints](https://twitter.com/pauloxnet/status/1583350887375773696). I was
particularly interested in working on open tickets related to the ORM. It so
happened that [Simon Charette](https://github.com/charettes) was at Django Con
and was able to meet with several of us to talk through the inner working of
the ORM.
With Simon helping to guide us, I took a stab at an open ticket and settled on
[10070](https://code.djangoproject.com/ticket/10070). After reviewing it on my
own, and then with Simon, it looked like it wasn't really a bug anymore, and
so we agreed that I could mark it as
[done](https://code.djangoproject.com/ticket/10070#comment:22).
Kind of anticlimactic given what I was **hoping** to achieve, but a closed
ticket is a closed ticket! And so I [tweeted out my
accomplishment](https://twitter.com/ryancheley/status/1583206004744867841) for
all the world to see.
A few weeks later though, a
[comment](https://code.djangoproject.com/ticket/10070#comment:22) was added
that it actually was still a bug and it was reopened.
I was disappointed ... but I now had a chance to actually fix a real bug! [I
started in earnest](https://github.com/ryancheley/public-
notes/issues/1#issue-1428819941).
A suggestion / pattern for working through learning new things that [Simon
Willison](https://simonwillison.net) had mentioned was having a `public-notes`
repo on GitHub. He's had some great stuff that he's worked through that you
can see [here](https://github.com/simonw/public-notes/issues?q=is%3Aissue).
Using this as a starting point, I decided to [walk through what I learned
while working on this open ticket](https://github.com/ryancheley/public-
notes/issues/1).
Over the course of 10 days I had a 38 comment 'conversation with myself' and
it was **super** helpful!
A couple of key takeaways from working on this issue:
* [Carlton Gibson](https://github.com/carltongibson) [said](https://overcast.fm/+QkIrhujD0/21:00) essentially once you start working a ticket from [Trac](https://code.djangoproject.com/), you are the world's foremost expert on that ticket ... and he's right!
* ... But, you're not working the ticket alone! During the course of my work on the issue I had help from [Simon Charette](https://github.com/charettes), [Mariusz Felisiak](https://github.com/felixxm), [Nick Pope](https://github.com/ngnpope), and [Shai Berger](https://github.com/shaib)
* The ORM can seem big and scary ... but remember, it's _just_ Python
I think that each of these lesson learned is important for anyone thinking of
contributing to Django (or other open source projects).
That being said, the last point is one that I think can't be emphasized
enough.
The ORM has a reputation for being this big black box that only 'really smart
people' can understand and contribute to. But, it really is _just_ Python.
If you're using Django, you know (more likely than not) a little bit of
Python. Also, if you're using Django, and have written **any** models, you
have a conceptual understanding of what SQL is trying to do (well enough I
would argue) that you can get in there AND make sense of what is happening.
And if you know a little bit of Python a great way to learn more is to get
into a project like Django and try to fix a bug.
[My initial solution](https://code.djangoproject.com/ticket/10070#comment:27)
isn't [the final one that got
merged](https://github.com/django/django/pull/16243) ... it was a
collaboration with 4 people, 2 of whom I've never met in real life, and the
other 2 I only just met at DjangoCon US a few weeks before.
While working through this I learned just as much from the feedback on my code
as I did from trying to solve the problem with my own code.
All of this is to say, contributing to open source can be hard, it can be
scary, but honestly, I can't think of a better place to start than Django, and
there are [lots of places to
start](https://code.djangoproject.com/query?owner=nobody&status=assigned&status=new&col=id&col=summary&col=owner&col=status&col=component&col=type&col=version&desc=1&order=id).
And for those of you feeling a bit adventurous, there are plenty of
[ORM](https://code.djangoproject.com/query?status=assigned&status=new&owner=nobody&component=Database+layer+\(models%2C+ORM\)&col=id&col=summary&col=status&col=component&col=owner&col=type&col=version&desc=1&order=id)
tickets just waiting for you to try and fix them!
",2022-11-12,contributing-to-django,"I went to [DjangoCon US](https://2022.djangocon.us) a few weeks ago and [hung
around for the
sprints](https://twitter.com/pauloxnet/status/1583350887375773696). I was
particularly interested in working on open tickets related to the ORM. It so
happened that [Simon Charette](https://github.com/charettes) was at Django Con
and was able to meet with several of us to talk through …
",Contributing to Django or how I learned to stop worrying and just try to fix an ORM Bug,https://www.ryancheley.com/2022/11/12/contributing-to-django/
ryan,technology,"Last Saturday (July 3rd) while on vacation, I dubbed it “Security update
Saturday”. I took the opportunity to review all of the GitHub bot alerts about
out of date packages, and make the updates I needed to.
This included updating `django-sql-dashboard` to [version
1.0](https://github.com/simonw/django-sql-dashboard/releases/tag/1.0) … which
I was really excited about doing. It included two things I was eager to see:
1. Implemented a new column cog menu, with options for sorting, counting distinct items and counting by values. [#57](https://github.com/simonw/django-sql-dashboard/issues/57)
2. Admin change list view now only shows dashboards the user has permission to edit. Thanks, [Atul Varma](https://github.com/atverma). [#130](https://github.com/simonw/django-sql-dashboard/issues/130)
I made the updates on my site StadiaTracker.com using my normal workflow:
1. Make the change locally on my MacBook Pro
2. Run the tests
3. Push to UAT
4. Push to PROD
The next day, on July 4th, I got the following error message via my error
logging:
Internal Server Error: /dashboard/games-seen-in-person/
ProgrammingError at /dashboard/games-seen-in-person/
could not find array type for data type information_schema.sql_identifier
So I copied the [url](https://stadiatracker.com/dashboard/games-seen-in-
person/) `/dashboard/games-seen-in-person/` to see if I could replicate the
issue as an authenticated user and sure enough, I got a 500 Server error.
## Troubleshooting process
The first thing I did was to fire up the local version and check the url
there. Oddly enough, it worked without issue.
OK … well that’s odd. What are the differences between the local version and
the uat / prod version?
The local version is running on macOS 10.15.7 while the uat / prod versions
are running Ubuntu 18.04. That could be one source of the issue.
The local version is running Postgres 13.2 while the uat / prod versions are
running Postgres 10.17
OK, two differences. Since the error is `could not find array type for data
type information_schema.sql_identifier` I’m going to start with taking a look
at the differences on the Postgres versions.
First, I looked at the [Change Log](https://github.com/simonw/django-sql-
dashboard/releases) to see what changed between version 0.16 and version 1.0.
Nothing jumped out at me, so I looked at the
[diff](https://github.com/simonw/django-sql-dashboard/compare/acb3752..b8835)
between several files between the two versions looking specifically for
`information_schema.sql_identifier` which didn’t bring up anything.
Next I checked for either `information_schema` or `sql_identifier` and found a
change in the `views.py` file. On line 151 (version 0.16) this change was
made:
string_agg(column_name, ', ' order by ordinal_position) as columns
to this:
array_to_json(array_agg(column_name order by ordinal_position)) as columns
Next, I extracted the entire SQL statement from the `views.py` file to run in
Postgres on the UAT server
with visible_tables as (
    select table_name
    from information_schema.tables
    where table_schema = 'public'
    order by table_name
),
reserved_keywords as (
    select word
    from pg_get_keywords()
    where catcode = 'R'
)
select
    information_schema.columns.table_name,
    array_to_json(array_agg(column_name order by ordinal_position)) as columns
from
    information_schema.columns
join
    visible_tables on
        information_schema.columns.table_name = visible_tables.table_name
where
    information_schema.columns.table_schema = 'public'
group by
    information_schema.columns.table_name
order by
    information_schema.columns.table_name
Running this generated the same error I was seeing from the logs!
Next, I picked apart the various select statements, testing each one to see
what failed, and ended on this one:
select information_schema.columns.table_name,
array_to_json(array_agg(column_name order by ordinal_position)) as columns
from information_schema.columns
Which generated the same error message. Great!
In order to determine how to proceed next I googled `sql_identifier` to see
what it was. Turns out it’s a field type in Postgres! (I’ve been working in
MSSQL for more than 10 years and as far as I know, this isn’t a field type
over there, so I learned something)
Further, there were [changes made to that field type in Postgres
12](https://bucardo.org/postgres_all_versions#version_12.0)!
OK, since there were changes made to that field type in Postgres 12, I’ll
probably need to cast the field to another field type that won’t fail.
That led me to try this:
select information_schema.columns.table_name,
array_to_json(array_agg(cast(column_name as text) order by ordinal_position)) as columns
from information_schema.columns
Which returned a value without error!
## Submitting the updated code
With the solution in hand, I read the [Contribution
Guide](https://github.com/simonw/django-sql-
dashboard/blob/main/docs/contributing.md) and submitted my patch. And the
most awesome part? Within less than an hour Simon Willison (the project’s
maintainer) had replied back and merged my code!
And then, the icing on the cake was getting a [shout out in a post that Simon
wrote](https://simonwillison.net/2021/Jul/6/django-sql-dashboard/) up about
the update that I submitted!
Holy smokes that was sooo cool.
I love solving problems, and I love writing code, so this kind of stuff just
really makes my day.
Now, I’ve contributed to an open source project (that makes 3 now!) and the
issue with the `/dashboard/` has been fixed.
",2021-07-09,contributing-to-django-sql-dashboard,"Last Saturday (July 3rd) while on vacation, I dubbed it “Security update
Saturday”. I took the opportunity to review all of the GitHub bot alerts about
out of date packages, and make the updates I needed to.
This included updated `django-sql-dashboard` to [version
1.0](https://github.com/simonw/django-sql-dashboard/releases/tag/1.0) … which
I was really excited …
",Contributing to django-sql-dashboard,https://www.ryancheley.com/2021/07/09/contributing-to-django-sql-dashboard/
ryan,technology,"I read about a project called
[Tryceratops](https://pypi.org/project/tryceratops/) on Twitter when it was
[tweeted about by Jeff
Triplet](https://twitter.com/webology/status/1414233648534933509)
I checked it out and it seemed interesting. I decided to use it on my
[simplest Django project](https://doestatisjrhaveanerrortoday.com) just to
give it a test drive running this command:
tryceratops .
and got this result:
Done processing! 🦖✨
Processed 16 files
Found 0 violations
Failed to process 1 files
Skipped 2340 files
This is nice, but what is the file that failed to process?
This left me with two options:
1. Complain that this awesome tool created by someone didn't do the thing I thought it needed to do
OR
1. Submit an issue to the project and offer to help.
I went with option 2 😀
My initial commit was made in a pretty naive way. It did the job, but not in
the best way for maintainability. I had a really great exchange with the
maintainer [Guilherme Latrova](https://github.com/guilatrova) about the change
that was made and he helped to direct me in a different direction.
The biggest thing I learned while working on this project (for Python at
least) was the `logging` library. Specifically I learned how to add:
* a formatter
* a handler
* a logger
For my change, I added a simple format with a verbose handler in a custom
logger. It looked something like this:
The formatter:
""simple"": {
""format"": ""%(message)s"",
},
The handler:
""verbose_output"": {
""class"": ""logging.StreamHandler"",
""level"": ""DEBUG"",
""formatter"": ""simple"",
""stream"": ""ext://sys.stdout"",
},
The logger:
""loggers"": {
""tryceratops"": {
""level"": ""INFO"",
""handlers"": [
""verbose_output"",
],
},
},
This allows the `verbose` flag to output the message to Standard Out with an
`INFO` level of detail.
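Assembled into one place, those three pieces fit together something like this minimal sketch (the real Tryceratops config has more to it than this):
import logging
import logging.config

config = {
    'version': 1,
    'formatters': {
        'simple': {'format': '%(message)s'},
    },
    'handlers': {
        'verbose_output': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'simple',
            'stream': 'ext://sys.stdout',
        },
    },
    'loggers': {
        'tryceratops': {'level': 'INFO', 'handlers': ['verbose_output']},
    },
}

logging.config.dictConfig(config)
logging.getLogger('tryceratops').info('this message goes to stdout')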
Because of what I learned, I've started using the [logging
library](https://docs.python.org/3/library/logging.html) on some of my work
projects where I had tried to roll my own logging tool. I should have known
there was a logging tool in the Standard Library BEFORE I tried to roll my own
🤦🏻♂️
The other thing I (kind of) learned how to do was to squash my commits. I had
never had a need (or desire?) to squash commits before, but the commit message
is what Guilherme uses to generate the change log. So, with his guidance and
help I tried my best to squash those commits. Although in the end he had to do
it (still not entirely sure what I did wrong) I was exposed to the idea of
squashing commits and why they might be done. A win-win!
The best part about this entire experience was getting to work with Guilherme
Latrova. He was super helpful and patient and had great advice without telling
me what to do. The more I work within the Python ecosystem the more I'm just
blown away by just how friendly and helpful everyone is and it's what makes me
want to do these kinds of projects.
If you haven't had a chance to work on an open source project, I highly
recommend it. It's a great chance to learn and to meet new people.
",2021-08-07,contributing-to-tryceratops,"I read about a project called
[Tryceratops](https://pypi.org/project/tryceratops/) on Twitter when it was
[tweeted about by Jeff
Triplet](https://twitter.com/webology/status/1414233648534933509)
I checked it out and it seemed interesting. I decided to use it on my
[simplest Django project](https://doestatisjrhaveanerrortoday.com) just to
give it a test drive running this command:
tryceratops .
and got this result …
",Contributing to Tryceratops,https://www.ryancheley.com/2021/08/07/contributing-to-tryceratops/
ryan,musings,"# Converting Writing Examples from doc to markdown: My Process
All of my writing examples were written while attending the [University of
Arizona](http://www.arizona.edu) when I was studying Economics.
These writing examples are from 2004 and were written in either [Microsoft
Word](https://en.wikipedia.org/wiki/Microsoft_Word) OR the [OpenOffice
Writer](https://en.wikipedia.org/wiki/OpenOffice.org)
Before getting the files onto [Github](https://github.com/miloardot/) I wanted
to convert them into [markdown](https://en.wikipedia.org/wiki/Markdown) so
that they would be in plain text.
I did this mostly as an exercise to see if I could, but in going through it
I'm glad I did. Since the files were written in .doc format, and the
[.doc](https://en.wikipedia.org/wiki/Doc_\(computing\)) format has been
replaced with the [.docx](https://en.wikipedia.org/wiki/Office_Open_XML)
format it could be that at some point my work would be inaccessible. Now, I
don't have to worry about that.
So, how did I get from a .doc file written in 2004 to a converted markdown
file created in 2016? Here's how:
## Round 1
1. Downloaded the Doc files from my Google Drive to my local Desktop and saved them into a folder called `Summaries`
2. Each week of work had its own directory, so I had to go into each directory individually (not sure how to do recursive work _yet_ )
3. Each of the files was written in 2004 so I had to change the file types from .doc to .docx. This was accomplished with this command: `textutil -convert docx *.doc`
4. Once the files were converted from .doc to .docx I ran the following commands:
1. `cd ../yyyymmdd` where yyyy = YEAR, mm = Month in 2 digits; dd = day in 2 digits
2. `for f in *\ *; do mv ""$f"" ""${f// /_}""; done` [\^1](http://stackoverflow.com/questions/2709458/bash-script-to-replace-spaces-in-file-names)\- this would replace the space character with an underscore. this was needed so I could run the next command
3. `for file in $(ls *.docx); do pandoc -s -S ""${file}"" -o ""${file%docx}md""; done` [\^2](http://stackoverflow.com/questions/11023543/recursive-directory-parsing-with-pandoc-on-mac) \- this uses pandoc to convert the docx file into valid markdown files
4. `mv *.md ../` \- used to move the .md files into the next directory up
5. With that done I just needed to move the files from my `Summaries` directory to my `writing-examples` github repo. I'm using the GUI for this so I have a folder on my desktop called `writing-examples`. To move them I just used `mv Summaries/*.md writing-examples/`
So that's it. Nothing **too** fancy, but I wanted to write it down so I can
look back on it later and know what the heck I did.
## Round 2
The problem I'm finding is that the bulk conversion using `textutil` isn't
keeping the footnotes from the original .doc file. These are important though,
as they reference the original work. Ugh!
Used this command [\^5](http://stackoverflow.com/questions/2709458/bash-
script-to-replace-spaces-in-file-names) to recursively replace the spaces in
the file names with underscores:
`find . -depth -name '* *' | while IFS= read -r f ; do mv -i ""$f"" ""$(dirname
""$f"")/$(basename ""$f""|tr ' ' _)"" ; done`
Used this command
[\^3](http://hints.macworld.com/article.php?story=20060309220909384) to
convert all of the .doc to .docx at the same time
`find . -name '*.doc' -exec textutil -convert docx '{}' \;`
Used this command [\^4](https://gist.github.com/bzerangue/2504041) to generate
the markdown files recursively:
`find . -name ""*.docx"" | while read i; do pandoc -f docx -t markdown ""$i"" -o
""${i%.*}.md""; done;`
Used this command to move the markdown files:
Never figured out what to do here :(
## Round 3
OK, I just gave up on using `textutil` for the conversion. It wasn't keeping
the footnotes on the conversion from .doc to .docx.
Instead I used the [Google Drive](https://drive.google.com/) add in [Converter
for Google Drive Document](https://www.driveconverter.com). It converted the
.doc to .docx **AND** kept the footnotes like I wanted it to.
Of course, it output all of the files to the same directory, so the work I did
to get the recursion to work previously can't be applied here **sigh**
Now, the only commands to run from the terminal are the following:
1. `for f in *\ *; do mv ""$f"" ""${f// /_}""; done` [^1]- this would replace the space character with an underscore. this was needed so I could run the next command
2. `for file in $(ls *.docx); do pandoc -s -S ""${file}"" -o ""${file%docx}md""; done` [^2] - this uses pandoc to convert the docx file into valid markdown files
3. `mv *.md `
## Round 4
Like any ~~good~~ ~~bad~~ lazy programmer I've opted for a brute force method
of converting the `doc` files to `docx` files. I opened each one in Word on
macOS and saved as `docx`. Problem solved ¯\_(ツ)_/¯
Step 1: used the command I found here
[\^7](http://stackoverflow.com/questions/2709458/bash-script-to-replace-
spaces-in-file-names) to recursively replace the spaces in the file names
with underscores `_`
> `find . -depth -name '* *' | while IFS= read -r f ; do mv -i ""$f""
> ""$(dirname ""$f"")/$(basename ""$f""|tr ' ' _)"" ; done`
Step 2: Use the command found here
[\^6](https://gist.github.com/bzerangue/2504041) to generate the markdown
files recursively:
`find . -name ""*.docx"" | while read i; do pandoc -f docx -t markdown ""$i"" -o
""${i%.*}.md""; done;`
Step 3: Add the files to my GitHub repo `graduate-writing-examples`
For this I used the GitHub macOS Desktop App to create a repo in my Documents
directory, so it lives in `~/Documents/graduate-writing-examples/`
I then used the finder to locate all of the `md` files in the `Summaries`
folder and then dragged them into the repo. There were 2 files with the same
name `Rose_Summary` and `Libecap_and_Johnson_Summary`. While I'm sure that I
could have figured out how to accomplish this with the command line, this took
less than 1 minute, and I had just spent 5 minutes trying to find a terminal
command to do it. Again, the lazy programmer wins.
Once the files were in the local repo I committed the files and _boom_ they
were in my [GitHub Writing Examples](https://github.com/miloardot/graduate-
writing-examples) repo.
",2016-10-07,converting-writing-examples-from-doc-to-markdown-my-process,"# Converting Writing Examples from doc to markdown: My Process
All of my writing examples were written while attending the [University of
Arizona](http://www.arizona.edu) when I was studying Economics.
These writing examples are from 2004 and were written in either [Microsoft
Word](https://en.wikipedia.org/wiki/Microsoft_Word) OR the [OpenOffice
Writer](https://en.wikipedia.org/wiki/OpenOffice.org)
Before getting the files onto [Github …](https://github.com/miloardot/)
",Converting Writing Examples from doc to markdown: My Process,https://www.ryancheley.com/2016/10/07/converting-writing-examples-from-doc-to-markdown-my-process/
ryan,microblog,"This will be one of those frustrating blog posts where I'll wave my hands
about the code that I wrote but not actually be able to post it because I did
it for work.
A very specific (to my department) challenge we have is that we use a tool
called [Crush FTP](https://www.crushftp.com/index.html) to automate several
things. This automation is mostly around file movement, and file renaming.
Because this tool has permissions which are higher than my team and I, we have
to work with our IT team in order to set up various jobs. The IT team is
always really responsive when we need to make a change, or check on something,
but I really wanted to have an ability to be able to have my own documentation
to be able to answer questions about the jobs.
I recently discovered that each job can be exported out as an XML file, and
while XML has a very 'thar be dragons' vibe to it, these XML files were
**mostly** fine. I say mostly because there is one node that has all
manner of problematic text in it that causes all sorts of parsing failures. To
'fix' this I simply remove the content from that node and replace it with
placeholder text (for now).
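Since I can't share the real code, here's a minimal sketch of the general idea using just the standard library (the node name is hypothetical; the real one depends on the Crush FTP export format):
import re

def scrub_job_xml(raw_xml, node_name='script'):
    # swap the contents of the problematic node for placeholder text so
    # the rest of the document can be parsed cleanly
    pattern = re.compile(rf'<{node_name}>.*?</{node_name}>', re.DOTALL)
    return pattern.sub(f'<{node_name}>placeholder</{node_name}>', raw_xml)
Regex on XML is a blunt instrument, but for a known export format it's enough to get the file parseable.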
The final product will output plain text with details about each job, each
task in that job, and then a mermaid diagram of the flow at the bottom.
This is pretty much everything that my team and I need to have the
documentation to answer questions.
Some future improvements I'd like to implement are:
1. Automation of the XML file generation from the Crush FTP application
2. Automatic writing of the documentation to our Knowledge Management System 1
3. Write the data to a SQLite database so I can leverage [datasette](https://datasette.io/) to help clean up names, and various attributes of the tasks and jobs
1. We use [YouTrack](https://www.jetbrains.com/youtrack/) which is a really good Jira / Confluence replacement if you're looking for one ↩︎
",2025-02-13,creating-documentation-from-an-xml-file-using-python,"This will be one of those frustrating blog posts where I'll wave my hands
about the code that I wrote but not actually be able to post it because I did
it for work.
A very specific (to my department) challenge we have is that we use a tool
called …
",Creating Documentation from an XML file using Python,https://www.ryancheley.com/2025/02/13/creating-documentation-from-an-xml-file-using-python/
ryan,technology,"Creating meaningful, long #hastags can be a pain in the butt.
There you are, writing up a witty tweet or making that perfect caption for
your instagram pic and you realize that you have a fantastic idea for a hash
tag that is more of a sentence than a single word.
You proceed to write it out and unleash your masterpiece to the world and just
as you hit the submit button you notice that you have a typo, or the wrong
spelling of a word and #ohcrap you need to delete and retweet!
That led me to write a [Drafts](https://getdrafts.com) Action to take care of
that.
I’ll leave [others to write about the virtues of
Drafts](https://www.macstories.net/reviews/drafts-5-the-macstories-review/),
but it’s fantastic.
The Action I created has two steps: (1) to run some JavaScript and (2) to copy
the contents of the draft to the Clipboard. You can get my action
[here](https://actions.getdrafts.com/a/1Uo).
Here’s the JavaScript that I used to take a big long sentence and turn it into
a social media worthy hashtag
var contents = draft.content;
var newContents = ""#"";
editor.setText(newContents+contents.replace(/ /g, """").toLowerCase());
Super simple, but holy crap does it help!
",2019-03-30,creating-hastags-for-social-media-with-a-drafts-action,"Creating meaningful, long #hastags can be a pain in the butt.
There you are, writing up a witty tweet or making that perfect caption for
your Instagram pic and you realize that you have a fantastic idea for a
hashtag that is more of a sentence than a single …
",Creating Hastags for Social Media with a Drafts Action,https://www.ryancheley.com/2019/03/30/creating-hastags-for-social-media-with-a-drafts-action/
ryan,technology,"I’ve mentioned before that I have been working on getting the hummingbird
video upload automated.
Each time I thought I had it, and each time I was wrong.
For some reason I could run it from the command line without issue, but when
the cronjob would try and run it ... nothing.
Turns out, it was running, it just wasn’t doing anything. And that was my
fault.
The file I had set up in the cronjob was called `run_script.sh`
At first I was confused because the script was supposed to be writing all of
its activities out to a log file. But it didn’t appear to.
Then I noticed that the log.txt file it was writing was in the main `~`
directory. That should have been my first clue.
I kept trying to get the script to run, but suddenly, in a blaze of glory,
realized that it **was** running, it just wasn’t doing anything.
And it wasn’t doing anything for the same reason that the log file was being
written to the `~` directory.
All of the paths were relative instead of absolute, so when the script ran the
command `./create_mp4.sh` it looked for that script in the home directory,
didn’t find it, and moved on.
The fix was simple enough, just add absolute paths and we’re golden.
That means my `run_script.sh` goes from this:
# Create the script that will be run
./create_script.sh
echo ""Create Shell Script: $(date)"" >> log.txt
# make the script that was just created executable
chmod +x /home/pi/Documents/python_projects/create_mp4.sh
# Create the script to create the mp4 file
/home/pi/Documents/python_projects/create_mp4.sh
echo ""Create MP4 Shell Script: $(date)"" >> /home/pi/Documents/python_projects/log.txt
# upload video to YouTube.com
/home/pi/Documents/python_projects/upload.sh
echo ""Uploaded Video to YouTube.com: $(date)"" >> /home/pi/Documents/python_projects/log.txt
# Next we remove the video files locally
rm /home/pi/Documents/python_projects/*.h264
echo ""removed h264 files: $(date)"" >> /home/pi/Documents/python_projects/log.txt
rm /home/pi/Documents/python_projects/*.mp4
echo ""removed mp4 file: $(date)"" >> /home/pi/Documents/python_projects/log.txt
To this:
# change to the directory with all of the files
cd /home/pi/Documents/python_projects/
# Create the script that will be run
/home/pi/Documents/python_projects/create_script.sh
echo ""Create Shell Script: $(date)"" >> /home/pi/Documents/python_projects/log.txt
# make the script that was just created executable
chmod +x /home/pi/Documents/python_projects/create_mp4.sh
# Create the script to create the mp4 file
/home/pi/Documents/python_projects/create_mp4.sh
echo ""Create MP4 Shell Script: $(date)"" >> /home/pi/Documents/python_projects/log.txt
# upload video to YouTube.com
/home/pi/Documents/python_projects/upload.sh
echo ""Uploaded Video to YouTube.com: $(date)"" >> /home/pi/Documents/python_projects/log.txt
# Next we remove the video files locally
rm /home/pi/Documents/python_projects/*.h264
echo ""removed h264 files: $(date)"" >> /home/pi/Documents/python_projects/log.txt
rm /home/pi/Documents/python_projects/*.mp4
echo ""removed mp4 file: $(date)"" >> /home/pi/Documents/python_projects/log.txt
I made this change and then started getting an error about not being able to
access a `json` file necessary for the upload to
[YouTube](https://www.youtube.com). Sigh.
Then while searching for what directory the cronjob was running from I found
[this very simple](https://unix.stackexchange.com/questions/38951/what-is-the-
working-directory-when-cron-executes-a-job) idea. The response was, why not
just change it to the directory you want. 🤦♂️
I added the `cd` to the top of the file:
# change to the directory with all of the files
cd /home/pi/Documents/python_projects/
Anyway, now it works. Finally!
Tomorrow will be the first time (unless of course something else goes wrong)
that the entire process will be automated. Super pumped!
",2018-04-10,cronjob-finally,"I’ve mentioned before that I have been working on getting the hummingbird
video upload automated.
Each time I thought I had it, and each time I was wrong.
For some reason I could run it from the command line without issue, but when
the cronjob would try and run …
",Cronjob ... Finally,https://www.ryancheley.com/2018/04/10/cronjob-finally/
ryan,technology,"After **days** of trying to figure this out, I finally got the video to upload
via a cronjob.
There were 2 issues.
## Issue the first
Finally found the issue. The [original script from the YouTube developers
guide](https://developers.google.com/youtube/v3/guides/uploading_a_video) had
this:
CLIENT_SECRETS_FILE = ""client_secrets.json""
And then a couple of lines later, this:
% os.path.abspath(os.path.join(os.path.dirname(__file__), CLIENT_SECRETS_FILE))
When `crontab` would run the script it would run from a path that wasn’t where
the `CLIENT_SECRETS_FILE` file was and so a message would be displayed:
WARNING: Please configure OAuth 2.0
To make this sample run you will need to populate the client_secrets.json file
found at:
%s
with information from the Developers Console
https://console.developers.google.com/
For more information about the client_secrets.json file format, please visit:
https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
What I needed to do was to update the `CLIENT_SECRETS_FILE` to be the whole
path so that it could always find the file.
A simple change:
CLIENT_SECRETS_FILE = os.path.abspath(os.path.join(os.path.dirname(__file__), CLIENT_SECRETS_FILE))
## Issue the second
When the `create_mp4.sh` script would run it was reading all of the `h264`
files from the directory where they lived **BUT** they were attempting to
output the `mp4` file to `/` which it didn’t have permission to write to.
This was failing silently (I’m still not sure how I could have caught the
error). Since there was no `mp4` file to upload that script was failing
(though it was true that the location of the `CLIENT_SECRETS_FILE` was an
issue).
What I needed to do was change the `create_mp4.sh` file so that the MP4Box
command output the `mp4` file to the proper directory. The script went from
this:
(echo '#!/bin/sh'; echo -n ""MP4Box""; array=($(ls ~/Documents/python_projects/*.h264)); for index in ${!array[@]}; do if [ ""$index"" -eq 0 ]; then echo -n "" -add ${array[index]}""; else echo -n "" -cat ${array[index]}""; fi; done; echo -n "" hummingbird.mp4"") > create_mp4.sh
To this:
(echo '#!/bin/sh'; echo -n ""MP4Box""; array=($(ls ~/Documents/python_projects/*.h264)); for index in ${!array[@]}; do if [ ""$index"" -eq 0 ]; then echo -n "" -add ${array[index]}""; else echo -n "" -cat ${array[index]}""; fi; done; echo -n "" /home/pi/Documents/python_projects/hummingbird.mp4"") > /home/pi/Documents/python_projects/create_mp4.sh
The last bit `/home/pi/Documents/python_projects/create_mp4.sh` may not be
_necessary_ but I’m not taking any chances.
The [video posted tonight](https://www.youtube.com/watch?v=OaRiW1aFk9k) is the
first one that was completely automatic!
Now … if I could just figure out how to automatically fill up my hummingbird
feeder.
",2018-04-20,cronjob-redux,"After **days** of trying to figure this out, I finally got the video to upload
via a cronjob.
There were 2 issues.
## Issue the first
Finally found the issue. [Original script from YouTube developers
guide](https://developers.google.com/youtube/v3/guides/uploading_a_video)had
this:
CLIENT_SECRETS_FILE = ""client_secrets.json""
And then a couple of lines later, this:
% os.path …
",Cronjob Redux,https://www.ryancheley.com/2018/04/20/cronjob-redux/
Ryan Cheley,pages,"# R. RYAN CHELEY
### July 2016 - Present
**Senior Regional Director Business Informatics, DESERT OASIS HEALTHCARE**
Lead strategic development and implementation of Business Information systems
and reporting processes across multiple Heritage Provider Network companies,
including Desert Oasis Healthcare, Arizona Priority Care, and Heritage Victor
Valley Medical Group.
Direct a team managing critical healthcare operations through custom web
applications, dashboards, and automated reporting solutions. Key areas include
health plan authorizations, claims processing, and encounter reporting.
Drive operational efficiency through technology optimization, workflow
improvements, and data-driven decision making.
**Leadership & Team Development**
* Led cross-state team of 15 employees across multiple locations, promoting 2 staff to management roles
* Implemented career development initiatives including management tracks and IC growth paths
* Maintained 90%+ employee satisfaction since 2021 through regular 1:1 meetings and mentorship
**Technical Leadership & Process Improvement**
* Modernized development practices by implementing Agile for web development, Kanban for reporting teams, and migrating from Subversion to Git
* Architected and deployed dimensional data warehouse, enhancing data integrity and analytics capabilities
* Automated 837P/837I file processing
* Reduced outbound 837 error rate by 89% for Institutional claims and 96.4% for Professional claims
* Established Data Governance Committee for Desert Oasis Healthcare
**Healthcare Systems Implementation**
* Developed comprehensive suite of web-based workflow solutions including Authorization Queue, COVID Tracking, Claims Management, and Provider Management systems
* Implemented enterprise-wide reporting solutions using SSRS and Tableau across multiple facilities
**Recognition**
* Received Administrative Department of the Year awards (2017, 2019)
* Awarded KLAS Research Points of Light Award for Provider/Payer Collaboration (2023)
### February 2012 - July 2016
**Director - NextGen Support Services, DESERT OASIS HEALTHCARE**
Responsible for the design, development and implementation of business and
clinical information processes; Working closely with physicians, clinical staff
and business operational leadership to provide detailed requirements, develop
solutions, provide end-to-end project management and on-going technical
support; Managing staff, including coordinating cross-functional teams and
resources.
* Led Migration of Large Enterprise EHR to new Data Center
* Led Large Client NextGen Enterprise on successful upgrade path to Meaningful Use Compliant EHR versions every year from 2012 - 2015
* Received Peer Award for Most Innovative (2015)
* Promoted from Supervisor of one team in NextGen Department in 2012 to Director of Department by 2015
* Went from managing 3 staff to 6 staff (including 1 manager)
### November 2008 – February 2012
**Project Analyst, DESERT OASIS HEALTHCARE**
Responsible for the maintenance of the Member Roster for the Living & Aging
Well Program; Developed Reports for tracking provider/user productivity;
Development of Managed Care Custom Templates in the NextGen EHR
* Converted Member Roster from Excel format with no reporting capabilities to MS Access Database with rich feature set of reporting that allowed Medical Director to make quicker, more informed decisions
* Received Peer Award for Most Dependable (2011)
* Received Administrative Services Star of the Month (i.e. Employee of the Month) July 2010
* Created Managed Care Templates in NextGen EHR that facilitated the integration of Primary Care Medical Records with Case Management Records
### March 2006 – November 2008
**Web Services Project Manager, GRAPHTEK ADVERTISING AND DESIGN**
Responsible for the planning and implementation of Proprietary Web CMS for
Client Websites; Coordinating with Client Marketing Managers, IT and
Accounting personnel to ensure web site projects are completed in a timely
fashion; Client training on CMS
* Successfully implemented over 100 client websites on Custom Content Management System
* Introduced Post-Mortem Analysis of Client implementations which led to a decrease in both cost and time to implement
### March 2005 – April 2006
**Analyst - Revenue Management, JETBLUE AIRWAYS**
Setting inventory levels for six markets with twenty-four to thirty-four daily
flights; Analyzing market performance against metrics set by manager, including
year-over-year and forecast comparisons
* Increased Revenue Per Available Seat Mile in each of my assigned Markets Year-over-Year
### August 2001 – November 2002
**Global Support Analyst, BARRA, INC.**
Responsible for providing technical support to Institutional Investment
clients in the US, Canada and Europe; Assisting with theory, and
interpretation of Multifactor Models used in predicting risk of portfolios;
Resolving technical problems related to Barra’s Proprietary software;
Participating in interviewing and hiring of employees; Training new employees
on different aspects of Barra's Multi Factor Model; Giving presentations to new
hires from different global offices
* Was the top Support Analyst in terms of Tickets Closed for 3 straight months
## Education
MA Economics, University of Arizona, Tucson, Arizona 2004
BS Economics/Finance, Minor in Statistics, California Polytechnic State
University, San Luis Obispo, CA 2001
## Other Activities
* Member of Bermuda Dunes Community Council, September 2009 - June 2013
* Member of HIMSS December 2016 - present
* Member of the American Heart Association Executive Leadership Team, December 2020 - December 2021
* Member of the [Python Software Foundation](https://www.python.org/users/rcheley/), May 2020 - Present
* Member of the [Django Software Foundation](https://www.djangoproject.com/foundation/individual-members/), November 2021 - Present
* Ran 13 half marathons in 13 months, December 2013 - December 2014
",2025-04-02,cv,"# R. RYAN CHELEY
### July 2016 - Present
**Senior Regional Director Business Informatics, DESERT OASIS HEALTHCARE**
Lead strategic development and implementation of Business Information systems
and reporting processes across multiple Heritage Provider Network companies,
including Desert Oasis Healthcare, Arizona Priority Care, and Heritage Victor
Valley Medical Group.
Direct a team managing critical …
",CV,https://www.ryancheley.com/pages/cv/
ryan,technology,"[Dr Drang has posted on Daylight Savings in the
past](http://www.leancrew.com/all-this/2013/03/why-i-like-dst/), but in a
recent [post](http://leancrew.com/all-this/2018/03/one-table-following-
another/) he critiqued (rightly so) the data presentation by a journalist at
the Washington Post on Daylight Savings, and that got me thinking.
In the post he generated a chart showing both the total number of daylight
hours and the sunrise / sunset times in Chicago. However, initially he didn’t
post the code on how he generated it. The next day, in a follow up
[post](http://leancrew.com/all-this/2018/03/the-sunrise-plot/), he did and
that **really** got my thinking.
I wonder what the chart would look like for cities up and down the west coast
(say from San Diego, CA to Seattle WA)?
Drang’s post had all of the code necessary to generate the graph, but for the
data munging, he indicated:
> > If I were going to do this sort of thing on a regular basis, I’d write a
> script to handle this editing, but for a one-off I just did it “by hand.”
Doing it by hand wasn’t going to work for me if I was going to do several
cities and so I needed to write a parser for the source of the data ([The US
Naval Observatory](http://aa.usno.navy.mil)).
The entire script is on my GitHub [sunrise
_sunset_](https://github.com/ryancheley/sunrise_sunset) repo. I won’t go into
the nitty gritty details, but I will call out a couple of things that I
discovered during the development process.
Writing a parser is hard. Like _really_ hard. Each time I thought I had it, I
didn’t. I was finally able to get the parser to work on cities with `01`,
`29`, `30`, or `31` in their longitude / latitude combinations.
I generated the same graph as Dr. Drang for the following cities:
* Phoenix, AZ
* Eugene, OR
* Portland, OR
* Salem, OR
* Seaside, OR
* Eureka, CA
* Indio, CA
* Long Beach, CA
* Monterey, CA
* San Diego, CA
* San Francisco, CA
* San Luis Obispo, CA
* Ventura, CA
* Ferndale, WA
* Olympia, WA
* Seattle, WA
Why did I pick a city in Arizona? They don’t do Daylight Savings and I wanted
to have a comparison of what it’s like for them!
The charts in latitude order (from south to north) are below:
San Diego

Phoenix

Indio

Long Beach

Ventura

San Luis Obispo

Monterey

San Francisco

Eureka

Eugene

Salem

Portland

Seaside

Olympia

Seattle

Ferndale

While these images do show the different impact of Daylight Savings, I think
the images are more compelling when shown as a GIF:

We see just how different the impacts of DST are on each city depending on
their latitude.
One of [Dr. Drang’s main points in support of
DST](http://www.leancrew.com/all-this/2013/03/why-i-like-dst/) is:
> > If, by the way, you think the solution is to stay on DST throughout the
> year, I can only tell you that we tried that back in the 70s and it didn’t
> turn out well. Sunrise here in Chicago was after 8:00 am, which put school
> children out on the street at bus stops before dawn in the dead of winter.
> It was the same on the East Coast. Nobody liked that.
I think that comment says more about our school system and less about the need
for DST.
For this whole argument I’m way more on the side of CGP Grey who does a [great
job of explaining what Daylight Saving Time
is](https://www.youtube.com/watch?v=84aWtseb2-4).
I think we may want to start looking at a Universal Planetary time (say UTC)
and base all activities on that **regardless** of where you are in the world.
The only reason 5am _seems_ early (to some people) is because we’ve
collectively decided that 5am (depending on the time of the year) is either
**WAY** before sunrise or just a bit before sunrise, but really it’s just a
number.
If we used UTC in California (where I’m at) 5am would be 12pm. Normally 12pm
would be lunch time, but that’s only a convention that we have constructed. It
could just as easily be the crack of dawn as it could be lunch time.
Do I think a conversion like this will ever happen? No. I just really hope
that at some point in the distant future when aliens finally come and visit
us, we aren’t late (or them early) because we have such a wacky time system
here.
",2018-03-26,daylight-savings-time,"[Dr Drang has posted on Daylight Savings in the
past](http://www.leancrew.com/all-this/2013/03/why-i-like-dst/), but in a
recent [post](http://leancrew.com/all-this/2018/03/one-table-following-
another/) he critiqued (rightly so) the data presentation by a journalist at
the Washington Post on Daylight Savings, and that got me thinking.
In the post he generated a chart showing both the total number of …
",Daylight Savings Time,https://www.ryancheley.com/2018/03/26/daylight-savings-time/
ryan,technology,"Normally when I start a new Django project I’ll use the PyCharm setup wizard,
but recently I wanted to try out VS Code for a Django project and was super
stumped when I would get a message like this:
ERROR:root:code for hash md5 was not found.
Traceback (most recent call last):
  File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 147, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 97, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type md5
ERROR:root:code for hash sha1 was not found.
Traceback (most recent call last):
  File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 147, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 97, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha1
ERROR:root:code for hash sha224 was not found.
Traceback (most recent call last):
  File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 147, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 97, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha224
ERROR:root:code for hash sha256 was not found.
Traceback (most recent call last):
  File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 147, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 97, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha256
ERROR:root:code for hash sha384 was not found.
Traceback (most recent call last):
  File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 147, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 97, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha384
ERROR:root:code for hash sha512 was not found.
Traceback (most recent call last):
  File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 147, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File ""/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py"", line 97, in __get_builtin_constructor
    raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type sha512
Here are the steps I was using to get started
From a directory I wanted to create the project I would set up my virtual
environment
python3 -m venv venv
And then activate it
source venv/bin/activate
Next, I would install Django
pip install django
Next, using the `startproject` command per the
[docs](https://docs.djangoproject.com/en/3.2/ref/django-admin/#startproject
""Start a new Django Project"") I would
django-admin startproject my_great_project .
And get the error message above 🤦🏻♂️
The strangest part about the error message is that it references Python2.7
everywhere … which is odd because I’m in a Python3 virtual environment.
I did a `pip list` and got:
Package Version
---------- -------
asgiref 3.3.4
Django 3.2.4
pip 21.1.2
pytz 2021.1
setuptools 49.2.1
sqlparse 0.4.1
OK … so everything is in my virtual environment. Let’s drop into the REPL and
see what’s going on
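What I was checking for was roughly this (a minimal sketch, not the exact
session in the screenshot):
import sys
import django

print(sys.executable)        # should point into the project's venv/bin
print(django.get_version())  # 3.2.4, matching the pip list above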

Well, that looks to be OK.
Next, I checked the contents of my directory using `tree -L 2`
├── manage.py
├── my_great_project
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── venv
    ├── bin
    ├── include
    ├── lib
    └── pyvenv.cfg
Yep … that looks good too.
OK, let’s go look at the installed packages for Python 2.7 then. On macOS
they’re installed at
/usr/local/lib/python2.7/site-packages
Looking in there and I see that Django is installed.
OK, let’s use pip to uninstall Django from Python2.7, except that `pip` gives
essentially the same result as running the `django-admin` command.
OK, let’s just remove it manually. After a bit of googling I found this
[Stackoverflow](https://stackoverflow.com/a/8146552) answer on how to remove
the offending package (which is what I assumed would be the answer, but better
to check, right?)
After removing the `Django` install from Python 2.7 and running `django-admin
--version` I get

So I googled that error message and found another answer on
[Stackoverflow](https://stackoverflow.com/a/10756446) which led me to look at
the `manage.py` file. When I `cat` the file I get:
# manage.py
#!/usr/bin/env python
import os
import sys
...
That first line SHOULD be finding the Python executable in my virtual
environment, but it’s not.
Next I googled the error message `django-admin code for hash sha384 was not
found`
Which led to this [Stackoverflow](https://stackoverflow.com/a/60575879)
answer. I checked to see if Python2 was installed with brew using
brew leaves | grep python
which returned `python@2`
Based on the answer above, the solution was to uninstall the Python2 that was
installed by `brew`. Now, although [Python2 has
retired](https://www.python.org/doc/sunset-python-2/), I was leery of
uninstalling it on my system without first verifying that I could remove the
brew version without impacting the system version which is needed by macOS.
Using `brew info python@2` I determined where `brew` installed Python2 and
compared it to where Python2 is installed by macOS and they are indeed
different
Output of `brew info python@2`
...
/usr/local/Cellar/python@2/2.7.15_1 (7,515 files, 122.4MB) *
Built from source on 2018-08-05 at 15:18:23
...
Output of `which python`
`/usr/bin/python`
OK, now we can remove the version of Python2 installed by `brew`
brew uninstall python@2
Now with all of that cleaned up, let's try again. From a clean project
directory:
python3 -m venv venv
source venv/bin/activate
pip install django
django-admin --version
The last command returned
zsh: /usr/local/bin/django-admin: bad interpreter: /usr/local/opt/python@2/bin/python2.7: no such file or directory
3.2.4
OK, I can get the version number and it mostly works, but can I create a new
project?
django-admin startproject my_great_project .
Which returns
zsh: /usr/local/bin/django-admin: bad interpreter: /usr/local/opt/python@2/bin/python2.7: no such file or directory
BUT, the project was installed
├── db.sqlite3
├── manage.py
├── my_great_project
│   ├── __init__.py
│   ├── __pycache__
│   ├── asgi.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── venv
    ├── bin
    ├── include
    ├── lib
    └── pyvenv.cfg
And I was able to run it
python manage.py runserver

Success! I’ve still got that last bug to deal with, but that’s a story for a
different day!
## Short Note
My initial fix, and my initial draft for this article, was to use the old
adage, turn it off and turn it back on. In this case, the implementation would
be to `deactivate` and then re-`activate` the virtual environment, and that’s
what I’d been doing.
As I was writing up this article I was hugely influenced by the work of [Julia
Evans](https://twitter.com/b0rk) and kept asking, “but why?”. She’s been
writing a lot of awesome, amazing things, and has several [zines for
purchase](https://wizardzines.com) that I would highly recommend.
She’s also generated a few [debugging
‘games’](https://jvns.ca/blog/2021/04/16/notes-on-debugging-puzzles/) that are
a lot of fun.
Anyway, thanks Julia for pushing me to figure out the why for this issue.
## Post Script
I figured out the error message above and figured, well, I might as well
update the post! I thought it had to do with `zsh`, but no, it was just more
of the same.
The issue was that Django had been installed in the base Python2 (which I
knew). All I had to do was to uninstall it with pip.
pip uninstall django
The trick was that pip wasn't working out for me ... it was generating errors.
So I had to run the command
python -m pip uninstall django
I had to run this AFTER I put the Django folder back into
`/usr/local/lib/python2.7/site-packages` (if you'll recall from above, I
removed it from the folder)
After that clean up was done, everything worked out as expected! I just had to
keep digging!
",2021-06-13,debugging-setting-up-a-django-project,"Normally when I start a new Django project I’ll use the PyCharm setup wizard,
but recently I wanted to try out VS Code for a Django project and was super
stumped when I would get a message like this:
ERROR:root:code for hash md5 was not found.
Traceback …
",Debugging Setting up a Django Project,https://www.ryancheley.com/2021/06/13/debugging-setting-up-a-django-project/
ryan,productivity,"I've been an on-again, off-again user of the [Getting Things
Done](http://gettingthingsdone.com) methodology for several years now. I'm
also a geek, so I indulge my inner geekiness and like to have 'tools' to help
me with my hobbies / neuroses. Enter
[Omnifocus](https://www.omnigroup.com/omnifocus/) an amazing GTD application
created by [The Omni Group](https://www.omnigroup.com).
I have always enjoyed how easy it is to sync between each of my devices using
the Omni Sync Server so that my iPhone knows what changes I made on my iMac.
It's pretty sweet, but lately I've gotten overwhelmed with the cruft in my
Omnifocus database. So much so that I've actually stopped using OmniFocus as
my GTD application of choice and have 'gone rogue' and am not using anything
right now. Actually, I haven't used anything for several weeks now. It's
starting to get to me.
Tonight I decided, the hell with it. I'm ignoring my 'todo' list anyway, why
not just declare 'OmniFocus / GTD bankruptcy' and start the whole darn thing
over again.
In order to make 'all my troubles go away' I found
[this](https://support.omnigroup.com/omnifocus-reset-database/) article on the
[OmniGroup's support forum](https://support.omnigroup.com/) ... which BTW is a
great place for all things OmniFocus!
Using the instructions, I found where the `ofocus` file was located and changed
its name from this:

to this:

Then I just followed the steps 5 - 11 and magically all of my tasks were gone.
Just. Like. That.
Then I had to update my iOS OmniFocus, but that wasn't an issue. Just selected
'Keep Sync Database' to overwrite the database on iOS and I was all set.
Doing this loses ALL data, including the Contexts, and Perspectives, but I can
create ones that I need easily enough. There's this guy called
[MacSparky](https://www.macsparky.com) that's kind of a savant about this
stuff. I'm sure he's got a post or two that can help.
I don't know that I'll do better this time, or that I won't just do this again
in 6 months, or 12 months, or 5 years ... but for right now, it's what I need
to do so I can get back to Getting Things Done.
",2016-11-29,declaring-omnifocus-bankrupty,"I've been an on-again, off-again user of the [Getting Things
Done](http://gettingthingsdone.com) methodology for several years now. I'm
also a geek, so I indulge my inner geekiness and like to have 'tools' to help
me with my hobbies / neuroses. Enter
[Omnifocus](https://www.omnigroup.com/omnifocus/) an amazing GTD application
created by [The Omni Group](https://www.omnigroup.com).
I …
",Declaring Omnifocus Bankrupty,https://www.ryancheley.com/2016/11/29/declaring-omnifocus-bankrupty/
ryan,technology,"## Previous Efforts
When I first heard of Django I thought it looked like a really interesting,
and Pythonic, way to get a website up and running. I spent a whole weekend putting
together a site locally and then, using Digital Ocean, decided to push my idea
up onto a live site.
One problem that I ran into, which EVERY new Django Developer will run into
was static files. I couldn’t get static files to work. No matter what I did,
they were just … missing. I proceeded to spend the next few weekends trying to
figure out why, but alas, I was not very good (or patient) with reading
documentation and gave up.
Fast forward a few years, and while taking the 100 Days of Code on the Web
Python course from Talk Python to Me I was able to follow along on a part of
the course that pushed up a Django App to Heroku.
I wrote about that effort [here](https://pybit.es/my-first-django-app.html).
Needless to say, I was pretty pumped. But, I was wondering, is there a way I
can actually get a Django site to work on a non-Heroku (PaaS) type
infrastructure.
## Inspiration
While going through my Twitter timeline I came across a retweet from
TestDriven.io of [Matt Segal](https://mattsegal.dev/simple-django-
deployment.html). He has an **amazing** walkthrough of deploying a Django
site on the hard level (i.e. using Windows). It’s a mix of Blog posts and
YouTube Videos and I highly recommend it. There is some NSFW language, BUT if
you can get past that (and I can) it’s a great resource.
This series is meant to be a written record of what I did to implement these
recommendations and suggestions, and then to push myself a bit further to
expand the complexity of the app.
## Articles
A list of the Articles will go here. For now, here’s a rough outline of the
planned posts:
* [Setting up the Server (on Digital Ocean)](/setting-up-the-server-on-digital-ocean.html)
* [Getting your Domain to point to Digital Ocean Your Server](/getting-your-domain-to-point-to-digital-ocean-your-server.html)
* [Preparing the code for deployment to Digital Ocean](/preparing-the-code-for-deployment-to-digital-ocean.html)
* [Automating the deployment](/automating-the-deployment.html)
* Enhancements
The ‘Enhancements’ will be multiple follow up posts (hopefully) as I catalog
improvements make to the site. My currently planned enhancements are:
* Creating the App
* [Migrating from SQLite to Postgres](/using-postgresql.html)
* Integrating Git
* [Having Multiple Sites on a single Server](/setting-up-multiple-django-sites-on-a-digital-ocean-server.html)
* Adding Caching
* Integrating S3 on AWS to store Static Files and Media Files
* Migrate to Docker / Kubernetes
",2021-01-24,deploying-a-django-site-to-digital-ocean-a-series,"## Previous Efforts
When I first heard of Django I thought it looks like a really interesting, and
Pythonic way, to get a website up and running. I spent a whole weekend putting
together a site locally and then, using Digital Ocean, decided to push my idea
up onto a live …
",Deploying a Django Site to Digital Ocean - A Series,https://www.ryancheley.com/2021/01/24/deploying-a-django-site-to-digital-ocean-a-series/
ryan,musings,"The number of times an issue is resolved with a simple reboot is amazing. It’s
why when you call tech support (for anything) it’s always the first thing they
ask you.
Even with my experience in tech I can forget this one little trick when
troubleshooting my own stuff. I don’t have a tech support line to call so I
have to google, and google and google, and since the assumption is that I’ve
already rebooted, it’s not a standard answer that’s put out there. (I mean, of
course I rebooted to see if that fixed the problem).
I’ve written before about my [ITFDB and the announcement from Vin Scully “It’s
Time for Dodger Baseball!”](/setting-up-itfdb-with-a-voice.html). With the
start of the 2019 season the mp3 stopped playing.
I tried all sorts of fixes. I made sure the Pi was up to date with `apt-get
update` and `apt-get upgrade`. I thought maybe the issue was due to the
version of Python running on the Pi (3.4.2). I thought maybe the mp3 had
become corrupt and tried to regenerate it.
None of these things worked. Finally I found this post and the answer was so
obvious. To quote the answer:
> Have you tried rebooting?
>
> It's a total shot in the dark, but I just transitioned from XBMC to
> omxplayer and lost sound. What I did:
>
> # apt-get remove xbmc
>
> # apt-get autoremove
>
> # apt-get update
>
> # apt-get upgrade
>
> After that I lost sound. 10 minutes of frustration later I rebooted and
> everything worked again.
It wasn’t exactly my problem, but upon seeing it I decided “What the hell?”
And you know what, it totally worked.
I wish I would have checked to see when the last time a reboot had occurred,
but it didn’t occur to me until I started writing this post. Oh well … it
doesn’t really matter because it works now.
",2019-04-07,did-you-try-restarting-it,"The number of times an issue is resolved with a simple reboot is amazing. It’s
why when you call tech support (for anything) it’s always the first thing they
ask you.
Even with my experience in tech I can forget this one little trick when
troubleshooting my own …
",Did you try restarting it?,https://www.ryancheley.com/2019/04/07/did-you-try-restarting-it/
ryan,technology,"I work at a place that is heavily investing in the Microsoft Tech Stack.
Windows Servers, c#.Net, Angular, VB.net, Windows Work Stations, Microsoft SQL
Server ... etc
When not at work, I **really** like working with Python and Django. I've never
really thought I'd be able to combine the two until I discovered the package
mssql-django which was released Feb 18, 2021 in alpha and as a full-fledged
version 1 in late July of that same year.
Ever since then I've been trying to figure out how to incorporate Django into
my work life.
I'm going to use this series as an outline of how I'm working through the
process of getting Django to be useful at work. The issues I run into, and the
solutions I'm (hopefully) able to achieve.
I'm also going to use this as a more in depth analysis of an accompanying talk
I'm hoping to give at [Django Con 2022](https://2022.djangocon.us) later this
year.
I'm going to break this down into a several-part series that will roughly
align with the talk I'm hoping to give. The parts will be:
1. Introduction/Background
2. Overview of the Project
3. Wiring up the Project Models
4. Database Routers
5. Django Admin Customization
6. Admin Documentation
7. Review & Resources
My intention is to publish one part every week or so. Sometimes the posts will
come fast, and other times not. This will mostly be due to how well I'm doing
with writing up my findings and/or getting screenshots that will work.
The tool set I'll be using is:
* docker
* docker-compose
* Django
* MS SQL
* SQLite
",2022-06-15,django-and-legacy-databases,"I work at a place that is heavily investing in the Microsoft Tech Stack.
Windows Servers, c#.Net, Angular, VB.net, Windows Work Stations, Microsoft SQL
Server ... etc
When not at work, I **really** like working with Python and Django. I've never
really thought I'd be able to combine the …
",Django and Legacy Databases,https://www.ryancheley.com/2022/06/15/django-and-legacy-databases/
ryan,technology,"First, what are ""the commons""? The concept of ""the commons"" refers to
resources that are shared and managed collectively by a community, rather than
being owned privately or by the state. This idea has been applied to natural
resources like air, water, and grazing land, but it has also expanded to
include digital and cultural resources, such as open-source software,
knowledge databases, and creative works.
As Organization Administrators of Django Commons, we're focusing on
sustainability and stewardship as key aspects.
Asking for help is hard, but it can be done more easily in a safe environment.
As we saw with the [xz utils
backdoor](https://en.wikipedia.org/wiki/XZ_Utils_backdoor) attack, maintainer
burnout is real. And while there are several arguments about being part of a
'supply chain', if we can, as a community, offer up a place where maintainers
can work together for the sustainability and support of their packages, the
Django community will be better off!
From the [README](https://github.com/django-
commons/membership/blob/main/README.md) of the membership repo in Django
Commons
> Django Commons is an organization dedicated to supporting the community's
> efforts to maintain packages. It seeks to improve the maintenance experience
> for all contributors; reducing the barrier to entry for new contributors and
> reducing overhead for existing maintainers.
OK, but what does this new organization get me as a maintainer? The (stretch)
goal is that we'll be able to provide support to maintainers. Whether that's
helping to identify best practices for packages (like requiring tests), or
normalize the idea that maintainers can take a step back from their project
and know that there will be others to help keep the project going. Being able
to accomplish these two goals would be amazing ... but we want to do more!
In the long term we're hoping that we're able to do something to help provide
compensation to maintainers, but as I said, that's a long term goal.
The project was spearheaded by Tim Schilling and he was able to get lots of
interest from various folks in the Django Community. But I think one of the
great aspects of this community project is the transparency that we're
striving for. You can see [here](https://github.com/orgs/django-
commons/discussions/19) an example of a discussion, out in the open, as we try
to define what we're doing, together. Also, while Tim spearheaded this effort,
we're really all working as equals towards a common goal.
What we're building here is a sustainable infrastructure and community. This
community will allow packages to have a good home, to allow people to be as
active as they want to be, and also allow people to take a step back when they
need to.
Too often in tech, and especially in OSS, maintainers / developers will work
and work and work because the work they do is generally interesting, and has
interesting problems to try and solve.
But this can have a downside that we've all seen .. burnout.
By providing a platform for maintainers to 'park' their projects, along with
the necessary infrastructure to keep them active, the goal is to allow
maintainers the opportunity to take a break if, or when, they need to. When
they're ready to return, they can do so with renewed interest, with new
contributors and maintainers who have helped create a more sustainable
environment for the open-source project.
The idea for this project is very similar to, but different from, Jazz Band.
Again, from the [README](https://github.com/django-
commons/membership/blob/main/README.md)
> Django Commons and Jazzband have similar goals, to support community-
> maintained projects. There are two main differences. The first is that
> Django Commons leans into the GitHub paradigm and centers the organization
> as a whole within GitHub. This is a risk, given there's some vendor lock-in.
> However, the repositories are still cloned to several people's machines and
> the organization controls the keys to PyPI, not GitHub. If something were to
> occur, it's manageable.
>
> The second is that Django Commons is built from the beginning to have more
> than one administrator. Jazzband has been [working for a while to add
> additional roadies](https://github.com/jazzband/help/issues/196)
> (administrators), but there hasn't been visible progress. Given the
> importance of several of these projects it's a major risk to the community
> at large to have a single point of failure in managing the projects. By
> being designed from the start to spread the responsibility, it becomes
> easier to allow people to step back and others to step up, making Django
> more sustainable and the community stronger.
One of the goals for Django Commons is to be very public about what's going
on. We actively encourage use of the
[Discussions](https://github.com/orgs/django-commons/discussions) feature in
GitHub and have several active conversations happening there now1 2 3
So far we've been able to migrate ~~3~~ 4 libraries4 5 6 7 into Django Commons.
Each one has been a great learning experience, not only for the library
maintainers, but also for the Django Commons admins.
We're working to automate as much of the work as possible. [Daniel
Moran](https://github.com/cunla/) has done an amazing job of writing Terraform
scripts to help in the automation process.
While there are still several manual steps, with each new library, we discover
new opportunities for automation.
This is an exciting project to be a part of. If you're interested in joining
us you have a couple of options
1. [Transfer your project](https://github.com/django-commons/membership/issues/new?assignees=django-commons%2Fadmins&labels=Transfer+project+in&projects=&template=transfer-project-in.yml&title=%F0%9F%9B%AC+%5BINBOUND%5D+-+%3Cproject%3E) into Django Commons
2. [Join as member](https://github.com/django-commons/membership/issues/new?assignees=django-commons%2Fadmins&labels=New+member&projects=&template=new-member.yml&title=%E2%9C%8B+%5BMEMBER%5D+-+%3Cyour+handle%3E) and help contribute to one of the projects that's already in Django Commons
I'm looking forward to seeing you be part of this amazing community!
1. [How to approach existing libraries](https://github.com/orgs/django-commons/discussions/52) ↩︎
2. [Creating a maintainer-contributor feedback loop](https://github.com/orgs/django-commons/discussions/61) ↩︎
3. [DjangoCon US 2024 Maintainership Open pace](https://github.com/orgs/django-commons/discussions/42) ↩︎
4. [django-tasks-scheduler](https://github.com/django-commons/django-tasks-scheduler) ↩︎
5. [django-typer](https://github.com/django-commons/django-typer) ↩︎
6. [django-fsm-2](https://github.com/django-commons/django-fsm-2) ↩︎
7. [django-debug-toolbar](https://github.com/django-commons/django-debug-toolbar/) ↩︎
",2024-10-23,django-commons,"First, what are ""the commons""? The concept of ""the commons"" refers to
resources that are shared and managed collectively by a community, rather than
being owned privately or by the state. This idea has been applied to natural
resources like air, water, and grazing land, but it has also expanded …
",Django Commons,https://www.ryancheley.com/2024/10/23/django-commons/
ryan,technology,"I’ve been working on a Django Project for a while and one of the apps I have
tracks candidates. These candidates have dates of a specific type.
The models look like this:
## Candidate
class Candidate(models.Model):
    first_name = models.CharField(max_length=128)
    last_name = models.CharField(max_length=128)
    resume = models.FileField(storage=PrivateMediaStorage(), blank=True, null=True)
    cover_leter = models.FileField(storage=PrivateMediaStorage(), blank=True, null=True)
    email_address = models.EmailField(blank=True, null=True)
    linkedin = models.URLField(blank=True, null=True)
    github = models.URLField(blank=True, null=True)
    rejected = models.BooleanField()
    position = models.ForeignKey(
        ""positions.Position"",
        on_delete=models.CASCADE,
    )
    hired = models.BooleanField(default=False)
## CandidateDate
class CandidateDate(models.Model):
    candidate = models.ForeignKey(
        ""Candidate"",
        on_delete=models.CASCADE,
    )
    date_type = models.ForeignKey(
        ""CandidateDateType"",
        on_delete=models.CASCADE,
    )
    candidate_date = models.DateField(blank=True, null=True)
    candidate_date_note = models.TextField(blank=True, null=True)
    meeting_link = models.URLField(blank=True, null=True)

    class Meta:
        ordering = [""candidate"", ""-candidate_date""]
        unique_together = (
            ""candidate"",
            ""date_type"",
        )
## CandidateDateType
class CandidateDateType(models.Model):
    date_type = models.CharField(max_length=24)
    description = models.CharField(max_length=255, null=True, blank=True)
You’ll see from the CandidateDate model that the fields `candidate` and
`date_type` are unique together. One problem that I’ve been running into is
how to make that constraint easier to see in the form where the dates are
entered.
The Django built in validation will display an error message if a user were to
try and select a `candidate` and `date_type` that already existed, but it felt
like this could be done better.
I did a fair amount of Googling and had a couple of different _bright_ ideas,
but ultimately it came down to a pretty simple implementation of the `exclude`
keyword in the ORM
The initial `Form` looked like this:
class CandidateDateForm(ModelForm):
    class Meta:
        model = CandidateDate
        fields = [
            ""candidate"",
            ""date_type"",
            ""candidate_date"",
            ""meeting_link"",
            ""candidate_date_note"",
        ]
        widgets = {
            ""candidate"": HiddenInput,
        }
I updated it to include an `__init__` method which overrode the options in the
drop down.
def __init__(self, *args, **kwargs):
    super(CandidateDateForm, self).__init__(*args, **kwargs)
    try:
        # the candidate comes in via the form's initial data
        candidate = kwargs[""initial""][""candidate""]
        # date types this candidate already has ...
        candidate_date_set = CandidateDate.objects.filter(candidate=candidate).values_list(""date_type"", flat=True)
        # ... get excluded from the date_type drop down
        qs = CandidateDateType.objects.exclude(id__in=candidate_date_set)
        self.fields[""date_type""].queryset = qs
    except KeyError:
        pass
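For the `KeyError` guard to make sense, the form needs to be handed the
candidate via its initial data; the view side would look something like this (a
sketch, since the post doesn't show the view):
# in the view that builds the form (hypothetical; not shown in the post)
form = CandidateDateForm(initial={'candidate': candidate})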
Now, with this method the drop down will only show items which can be
selected, not all `CandidateDateType` options.
Seems like a better user experience AND I got to learn a bit about the Django
ORM
",2021-01-23,django-form-filters,"I’ve been working on a Django Project for a while and one of the apps I have
tracks candidates. These candidates have dates of a specific type.
The models look like this:
## Candidate
class Candidate(models.Model):
first_name = models.CharField(max_length=128)
last_name = models.CharField(max_length=128)
resume = models …
",Django form filters,https://www.ryancheley.com/2021/01/23/django-form-filters/
ryan,microblog,"Today on Mastodon [Eric Matthes](https://fosstodon.org/@ehmatthes) posted
about his library [django-simple-deploy](https://github.com/django-simple-
deploy/django-simple-deploy) and a plugin for it to be able to deploy to
[Digital Ocean](https://www.digitalocean.com/) and I am so pumped for this!
I said as much and Eric asked why.
My answer:
> all of my Django apps are deployed to Digital Ocean. I have a “good enough”
> workflow for deployment, but every time I need to make a new server for a
> new project I’m mostly looking through a ton of stuff to try and figure out
> what I did last time. This _mostly_ works in that after a few hours I have
> what I need, but having something simpler would be very nice … especially
> if/when I want to help someone with their own deployment to DO
The number of times I have wanted to help automate and/or make deployment
easier to Digital Ocean is numerous. It would have been extremely helpful for
me when I moved off Heroku and onto Digital Ocean as I had **no idea** how to
do the server setup, or deployment or anything remotely related.
A few years later and I still don't feel 100% comfortable with it all of the
time, and I'm a ""web professional""
Eric's tool is going to make this so much easier and I'm so here for that!
",2025-02-25,django-simple-deploy-and-digital-ocean,"Today on Mastodon [Eric Matthes](https://fosstodon.org/@ehmatthes) posted
about his library [django-simple-deploy](https://github.com/django-simple-
deploy/django-simple-deploy) and a plugin for it to be able to deploy to
[Digital Ocean](https://www.digitalocean.com/) and I am so pumped for this!
I said as much and Eric asked why.
My answer:
> all of my Django apps are deployed to Digital Ocean …
",Django Simple Deploy and Digital Ocean,https://www.ryancheley.com/2025/02/25/django-simple-deploy-and-digital-ocean/
ryan,technology,"# My Experience at DjangoCon US 2023
A few days ago I returned from DjangoCon US 2023 and wow, what an amazing
time. The only regret I have is that I didn't take very many pictures. This is
something I will need to work on for next year.
On Monday October 16th I gave a talk [Contributing to Django or how I learned
to stop worrying and just try to fix an ORM
Bug](https://2023.djangocon.us/talks/contributing-to-django-or-how-i-learned-
to-stop-worrying-and-just-try-to-fix-an-orm-bug/). The video will be posted on
YouTube in a few weeks. This was the first tech conference I've ever spoken
at!!!! I was super nervous leading up to the talk, and even a bit at the
start, but once I got going I finally settled in.
Here's me on stage taking a selfie with the crowd behind me

Luckily, my talk was one of the first non-Keynote talks so I was able to relax
and enjoy the conference for the rest of the time.
After the conference talks ended on Wednesday I stuck around for the sprints.
This is such a great time to be able to work on open source projects (Django
adjacent or not) and just generally hang out with other Djangonauts. I was
able to do some work on DjangoPackages with Jeff Triplett, and just generally
hang out with some truly amazing people.
The Django community is just so great. I've been to many conferences before,
but this one is the first where I feel like I belong.
I am having some of those post conference blues, but thankfully Kojo Idrissa
wrote something about how to [help with
that](https://kojoidrissa.com/conferences/community/pycon%20africa/noramgt/2019/08/11/post_conference_depression.html).
Taking his advice has helped me come down from the conference high.
Although the location of DjangoCon US 2024 hasn't been announced yet, I'm
making plans to attend.
I am also setting myself some goals to have completed by the start of DCUS
2024
* join the fundraising working group
* work on at least 1 code related ticket in Trac
* work on at least 1 doc related ticket in Trac
* have been part of a writing group with fellow Djangonauts and posted at least 1 article per month
I had a great experience speaking, and I **think** I'd like to do it again,
but I'm still working through that.
It's a lot harder to give a talk than I thought it would be! That being said,
I do have in my 'To Do' app a task to 'Brainstorm DjangoCon talk ideas' so
we'll see if (1) I'm able to come up with anything, and (2) I have a talk
accepted for 2024.
",2023-10-24,djangocon-us-2023,"# My Experience at DjangoCon US 2023
A few days ago I returned from DjangoCon US 2023 and wow, what an amazing
time. The only regret I have is that I didn't take very many pictures. This is
something I will need to work on for next year.
On Monday October …
",DjangoCon US 2023,https://www.ryancheley.com/2023/10/24/djangocon-us-2023/
ryan,musings,"# DjangoCon US 2024
I was able to attend [DCUS 2024](https://2024.djangocon.us) this year in
Durham from September 22 - September 27, and just like in 2023, it was an
amazing experience.
I gave another [talk](https://www.youtube.com/watch?v=JLYaAYY4JPc) (hooray!)
and got to hang out with some truly amazing people, many of whom I call my
friends.
I was fortunate in that my talk was on Monday morning, so as soon as my talk
was done, I could focus on the conference and less on being nervous about my
talk!
One thing I took advantage of this year, that I didn't in previous years, was
the 'Hallway Track'. I really enjoyed that time on Monday afternoon to
decompress with some of the other speakers in the lobby.
One of the talks that I was able to watch since the conference was
[Troubleshooting is a Lifestyle
😎](https://www.youtube.com/watch?v=a7iUKbug82k) which had this great note:
Asking for help is not a sign of failure - it's a strategy.
I am bummed that I missed a few talks live ([Product 101 for Techies and Tech
Teams](https://www.youtube.com/watch?v=75M0MC66H2o), [Passkeys: Your password-
free future](https://www.youtube.com/watch?v=ylv_k8TRpPk), and [Django: the
web framework that changed my
life](https://www.youtube.com/watch?v=X0Urp3RsKLY)) but I will go back and
watch them in the next several days and I'm really looking forward to that.
There is a great
[playlist](https://www.youtube.com/playlist?list=PL2NFhrDSOxgWqE_5w5CX2iUR7-P1D0ny7)
of ALL of the talks from this year (and previous years) that I highly
recommend you search through and watch!
A few others have written about their experiences ([Mario
Munoz](https://pythonbynight.com/blog/djangocon-2024) and [Will
Vincent](https://wsvincent.com/djangoconus-recap/)) and you should totally
read those.
## The Food
DCUS via the culinary experience!
Durham has some of the best food and I would go back again JUST for the food.
Some of my highlights were
* [Cheeni](https://www.cheenidurham.com/)
* [Thaiangle of Durham](https://www.thaiangleofdurham.com/)
* [Queeny's](https://www.queenysdurham.com/)
* [Ponysaurus](https://www.ponysaurusbrewing.com/)
* [Cocoa Cinnamon](https://littlewaves.coffee/pages/old-north-durham?srsltid=AfmBOooaYRO5ZB5bS9mZ43O1J_lVMyXSD_4ma0i8GZjaRg7UxcOgPaAm)
* [Pizza Torro](https://pizzeriatoro.com/)
* The conference venue food - fried chicken and peach cobbler were my favorite
## The Sprints
During the sprints I was able to work on a few tickets for DjangoPackages12
and get some clarification on a Django doc3 ticket that I've been wanting to
work on for a while now.
## The after party in Palm Springs
I left Durham _very_ early on Saturday morning to head back home to Southern
California. Leaving a great conference like DjangoCon US can be hard as Kojo
[has written
about](https://kojoidrissa.com/conferences/community/pycon%20africa/noramgt/2019/08/11/post_conference_depression.html).
One upside for me was knowing that a few people from the conference were road
tripping out to California and they were going to stop and visit! The
following week I had a great dinner with Thibaud, Sage, and Storm at
[Tac/Quila](https://tacquila.com/)
Here's [a toot on
Mastodon](https://mastodon.social/@ryancheley/113237643354514479) with a
picture of the 4 of us after dinner
## Looking Forward
I just feel so much more calm after the conference, and am super happy.
I'm looking forward to my involvement in the Django Community until the next
DjangoCon I'm able to attend 4. Some things specifically are:
* Working on Django tickets
* Admin work with Django Commons with Tim, Lacey, Daniel, and Storm
* Working on Django Packages with Jeff and Maksudul
* Djangonaut Space (if and when they need a navigator but just hanging out in the discord is pretty awesome too!)
I'm so grateful for the friends and community that Django has given to me. I'm
really hoping to be able to pay it forward with my involvement over the next
year until I have a chance to see all of these amazing people in person again.
1. settings consolidation ↩︎
2. docs update ↩︎
3. 27106 ↩︎
4. I'm working really hard on DCEU but the timing may not work out ↩︎
",2024-11-17,djangocon-us-2024,"# DjangoCon US 2024
I was able to attend [DCUS 2024](https://2024.djangocon.us) this year in
Durham from September 22 - September 27, and just like in 2023, it was an
amazing experience.
I gave another [talk](https://www.youtube.com/watch?v=JLYaAYY4JPc) (hooray!)
and got to hang out with some truly amazing people, many of whom I call my …
",DjangoCon US 2024,https://www.ryancheley.com/2024/11/17/djangocon-us-2024/
ryan,technology,"At DjangoCon US 2023 I gave a talk, and wrote about my experience [preparing
for that talk](https://www.ryancheley.com/2023/12/15/so-you-want-to-give-a-
talk-at-a-conference/)
Well, I spoke again at DjangoCon US this year (2024) and had a similar, but
wildly different experience in preparing for my talk.
Last year I lamented that I didn't really track my time (which is weird
because I track my time for ALL sorts of things!).
This year, I did track my time and have a much better sense of how much time I
prepared for the talk.
Another difference between the two years is that in 2023 I gave a 45 minute
talk, while this year my talk was 25 minutes.
I've heard that you need about 1 hour of prep time for each 1 minute of talk
that you're going to give. That means that, on average, for a 25 minute talk
I'd need about 25 hours of prep time.
[My time tracking shows](https://track.toggl.com/shared-
report/6c52f45a0feea26f7c8fd987abf73b2e) that I was a little short of that (19
hours) but my talk ended up being about 20 minutes, so it seems that maybe I
was on track for that.
This year, as last year, my general prep technique was to:
1. Give the presentation AND record it
2. Watch the recording and make notes about what I needed to change
3. Make the changes
I would typically do each step on a different day, though towards the end I
would do steps 2 and 3 on the same day, and during the last week I would do
all of the steps on the same day.
This flow really seems to help me get the most out of practicing my talk and
getting a sense of its strengths and weaknesses.
One issue that came up a week before I was to leave for DjangoCon US is that
my boss said I couldn't have anything directly related to my employer in the
presentation. My initial drafts didn't have specifics, but the examples I used
were too close for my comfort on that, so I ended up having to refactor that
part of my talk.
Honestly, I think it came out better because of it. During my practice runs I
felt like I was kind of dancing around topics, but once I removed them I felt
freer to just kind of speak my mind.
Preparing and giving talks like these are truly a ton of work. Yes, you'll
(most likely) be given a free ticket to the conference you're speaking at —
but unless you're a seasoned public speaker you will have to practice a lot to
give a great talk.
One thing I didn't mention in my prep time is that my talk was essentially
just a rendition of my series of blog posts I started writing at DjangoCon US
2023 ([Error Culture](https://www.ryancheley.com/2023/10/29/error-culture/))
So when you add in the time it took for me to brainstorm those articles,
write, and edit them, we're probably looking at another 5 - 7 hours of prep.
This puts me closer to the 25 hours of prep time for the 25 minute talk.
I've given 2 talks so far, and after each one I've said, 'Never again!'
It's been a few weeks since I gave my talk, and I have to say, I'm kind of
looking forward to trying to give a talk again next year. Now, I just need to
figure out what I would talk about that anyone would want to hear. 🤔
",2024-10-17,djangocon-us-2024-talk,"At DjangoCon US 2023 I gave a talk, and wrote about my experience [preparing
for that talk](https://www.ryancheley.com/2023/12/15/so-you-want-to-give-a-
talk-at-a-conference/)
Well, I spoke again at DjangoCon US this year (2024) and had a similar, but
wildly different experience in preparing for my talk.
Last year I lamented that I didn't really track my …
",DjangoCon US 2024 Talk,https://www.ryancheley.com/2024/10/17/djangocon-us-2024-talk/
ryan,microblog,"Next week starts session 4 of [Djangonaut Space](https://djangonaut.space/)
and I've been selected to be the Navigator for Team Venus with an amazing
group of people. As has happened before I go into this with an impossible
amount of [imposter
syndrome](https://en.m.wikipedia.org/wiki/Impostor_syndrome) lurking over me.
While this will be my **third** time doing this it still feels all new to me
and I'm constantly worried that I'm going to ""do it wrong"".
I have the start of a plan to help with my navigator duties, and I need to get
that all written down so that I don't forget what needs to be done and when it
needs to be done by! I'm hoping that I'll be able to pick up a ticket and work
alongside my Djangonauts as I have done before, but the seasons of life can,
and do, have a way of changing quickly.
Perhaps I can just try and focus on getting [my one current In Progress
ticket](https://code.djangoproject.com/ticket/27106) wrapped up before diving
into a new one 🤔
Anyway, I'm super excited about the prospect of Session 4 and can't wait to
""meet"" my Djangonauts on our first call next week.
Here's to hoping my imposter syndrome doesn't get the better of me 🚀
",2025-02-11,djangonaut-space-session-4,"Next week starts session 4 of [Djangonaut Space](https://djangonaut.space/)
and I've been selected to be the Navigator for Team Venus with an amazing
group of people. As has happened before I go into this with an impossible
amount of [imposter
syndrome](https://en.m.wikipedia.org/wiki/Impostor_syndrome) lurking over me.
While this will be my **third** time …
",Djangonaut Space - Session 4,https://www.ryancheley.com/2025/02/11/djangonaut-space-session-4/
ryan,technology,"I had read about a project called djhtml and wanted to use it on one of my
projects. The documentation is really good for adding it to precommit-ci, but
I wasn't sure what I needed to do to just run it on the command line.
It took a bit of googling, but I was finally able to get the right incantation
of commands to be able to get it to run on my templates:
    djhtml -i $(find templates -name '*.html' -print)
But of course because I have the memory of a goldfish and this is more than 3
commands to try to remember to string together, instead of telling myself I
would remember it, I simply added it to a just file and now have this recipe:
    # applies djhtml linting to templates
    djhtml:
        djhtml -i $(find templates -name '*.html' -print)
This means that I can now run `just djhtml` and I can apply djhtml's linting
to my templates.
Pretty darn cool if you ask me. But then I got to thinking, I can make this a
bit more general for 'linting' type activities. I include all of these in my
precommit-ci, but I figured, what the heck, might as well have a just recipe
for all of them!
So I refactored the recipe to be this:
    # applies linting to project (black, djhtml, flake8)
    lint:
        djhtml -i $(find templates -name '*.html' -print)
        black .
        flake8 .
And now I can run all of these linting style libraries with a single command
`just lint`
",2021-08-22,djhtml-and-justfile,"I had read about a project called djhtml and wanted to use it on one of my
projects. The documentation is really good for adding it to precommit-ci, but
I wasn't sure what I needed to do to just run it on the command line.
It took a bit of …
",djhtml and justfile,https://www.ryancheley.com/2021/08/22/djhtml-and-justfile/
ryan,technology,"In one of my [previous
posts](https://www.ryancheley.com/blog/2016/11/22/twitter-word-cloud) I walked
through how I generated a wordcloud based on my most recent 20 tweets. I
thought it would be _neat_ to do this for my [Dropbox](https://www.dropbox.com)
file names as well, just to see if I could.
When I first tried to do it (as previously stated, the Twitter Word Cloud post
was the first python script I wrote) I ran into some difficulties. I didn't
really understand what I was doing (although I still don't **really**
understand, I at least have a vague idea of what the heck I'm doing now).
The script isn't much different than the [Twitter](https://www.twitter.com)
word cloud. The only real differences are:
1. the way in which the `words` variable is being populated
2. the mask that I'm using to display the cloud
In order to go get the information from the file system I use the `glob`
library:
    import glob
The next lines have not changed
    import matplotlib.pyplot as plt
    from wordcloud import WordCloud, STOPWORDS
    from scipy.misc import imread
Instead of writing to a 'tweets' file I'm looping through the files, splitting
them at the `/` character and getting the last item (i.e. the file name) and
appending it to the list `f`:
    f = []
    for filename in glob.glob('/Users/Ryan/Dropbox/Ryan/**/*', recursive=True):
        f.append(filename.split('/')[-1])
The rest of the script generates the image and saves it to my Dropbox Account.
Again, instead of using a [Twitter](https://www.twitter.com) logo, I'm using a
**Cloud** image I found [here](http://www.shapecollage.com/shapes/mask-
cloud.png)
    words = ' '
    for line in f:
        words = words + line
    stopwords = {'https'}
    logomask = imread('mask-cloud.png')
    wordcloud = WordCloud(
        font_path='/Users/Ryan/Library/Fonts/Inconsolata.otf',
        stopwords=STOPWORDS.union(stopwords),
        background_color='white',
        mask=logomask,
        max_words=1000,
        width=1800,
        height=1400
    ).generate(words)
    plt.imshow(wordcloud.recolor(color_func=None, random_state=3))
    plt.axis('off')
    plt.savefig('/Users/Ryan/Dropbox/Ryan/Post Images/dropbox_wordcloud.png', dpi=300)
    plt.show()
And we get this:

",2016-11-25,dropbox-files-word-cloud,"In one of my [previous
posts](https://www.ryancheley.com/blog/2016/11/22/twitter-word-cloud) I walked
through how I generated a wordcloud based on my most recent 20 tweets. I
thought it would be _neat_ to do this for my [Dropbox](https://www.dropbox.com)
file names as well, just to see if I could.
When I first tried to do it …
",Dropbox Files Word Cloud,https://www.ryancheley.com/2016/11/25/dropbox-files-word-cloud/
ryan,technology,"Integrating a version control system into your development cycle is just kind
of one of those things that you do, right? I use GitHub for my version
control, and GitHub Actions to help with my deployment process.
There are 3 `yaml` files I use to get my local code deployed to my production
server:
* django.yaml
* dev.yaml
* prod.yaml
Each one serves its own purpose.
## django.yaml
The `django.yaml` file is used to run my tests and other actions on a GitHub
runner. It does this in 9 distinct steps, supported by one Postgres service.
The steps are:
1. Set up Python 3.8 - setting up Python 3.8 on the docker image provided by GitHub
2. psycopg2 prerequisites - setting up `psycopg2` to use the Postgres service created
3. graphviz prerequisites - setting up the requirements for graphviz which creates an image of the relationships between the various models
4. Install dependencies - installs all of my Python package requirements via pip
5. Run migrations - runs the migrations for the Django App
6. Load Fixtures - loads data into the database
7. Lint - runs `black` on my code
8. Flake8 - runs `flake8` on my code
9. Run Tests - runs all of the tests to ensure they pass
    name: Django CI
    on:
      push:
        branches-ignore:
          - main
          - dev
    jobs:
      build:
        runs-on: ubuntu-18.04
        services:
          postgres:
            image: postgres:12.2
            env:
              POSTGRES_USER: postgres
              POSTGRES_PASSWORD: postgres
              POSTGRES_DB: github_actions
            ports:
              - 5432:5432
            # needed because the postgres container does not provide a healthcheck
            options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
        steps:
          - uses: actions/checkout@v1
          - name: Set up Python 3.8
            uses: actions/setup-python@v1
            with:
              python-version: 3.8
          - uses: actions/cache@v1
            with:
              path: ~/.cache/pip
              key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
              restore-keys: |
                ${{ runner.os }}-pip-
          - name: psycopg2 prerequisites
            run: sudo apt-get install python-dev libpq-dev
          - name: graphviz prerequisites
            run: sudo apt-get install graphviz libgraphviz-dev pkg-config
          - name: Install dependencies
            run: |
              python -m pip install --upgrade pip
              pip install psycopg2
              pip install -r requirements/local.txt
          - name: Run migrations
            run: python manage.py migrate
          - name: Load Fixtures
            run: |
              python manage.py loaddata fixtures/User.json
              python manage.py loaddata fixtures/Sport.json
              python manage.py loaddata fixtures/League.json
              python manage.py loaddata fixtures/Conference.json
              python manage.py loaddata fixtures/Division.json
              python manage.py loaddata fixtures/Venue.json
              python manage.py loaddata fixtures/Team.json
          - name: Lint
            run: black . --check
          - name: Flake8
            uses: cclauss/GitHub-Action-for-Flake8@v0.5.0
          - name: Run tests
            run: coverage run -m pytest
## dev.yaml
The code here does essentially the same thing that is done in the `deploy.sh`
in my earlier post [Automating the Deployment](/automating-the-
deployment.html) except that it pulls code from my `dev` branch on GitHub onto
the server. The other difference is that this is on my UAT server, not my
production server, so if something goes off the rails, I don’t hose
production.
    name: Dev CI
    on:
      pull_request:
        branches:
          - dev
    jobs:
      deploy:
        runs-on: ubuntu-18.04
        steps:
          - name: deploy code
            uses: appleboy/ssh-action@v0.1.2
            with:
              host: ${{ secrets.SSH_HOST_TEST }}
              key: ${{ secrets.SSH_KEY_TEST }}
              username: ${{ secrets.SSH_USERNAME }}
              script: |
                rm -rf StadiaTracker
                git clone --branch dev git@github.com:ryancheley/StadiaTracker.git
                source /home/stadiatracker/venv/bin/activate
                cd /home/stadiatracker/
                rm -rf /home/stadiatracker/StadiaTracker
                cp -r /root/StadiaTracker/ /home/stadiatracker/StadiaTracker
                cp /home/stadiatracker/.env /home/stadiatracker/StadiaTracker/StadiaTracker/.env
                pip -q install -r /home/stadiatracker/StadiaTracker/requirements.txt
                python /home/stadiatracker/StadiaTracker/manage.py migrate
                mkdir /home/stadiatracker/StadiaTracker/static
                mkdir /home/stadiatracker/StadiaTracker/staticfiles
                python /home/stadiatracker/StadiaTracker/manage.py collectstatic --noinput -v0
                systemctl daemon-reload
                systemctl restart stadiatracker
## prod.yaml
Again, the code here does essentially the same thing that is done in the
`deploy.sh` in my earlier post [Automating the Deployment](/automating-the-
deployment.html) except that it pulls code from my `main` branch on GitHub
onto the server.
    name: Prod CI
    on:
      pull_request:
        branches:
          - main
    jobs:
      deploy:
        runs-on: ubuntu-18.04
        steps:
          - name: deploy code
            uses: appleboy/ssh-action@v0.1.2
            with:
              host: ${{ secrets.SSH_HOST }}
              key: ${{ secrets.SSH_KEY }}
              username: ${{ secrets.SSH_USERNAME }}
              script: |
                rm -rf StadiaTracker
                git clone git@github.com:ryancheley/StadiaTracker.git
                source /home/stadiatracker/venv/bin/activate
                cd /home/stadiatracker/
                rm -rf /home/stadiatracker/StadiaTracker
                cp -r /root/StadiaTracker/ /home/stadiatracker/StadiaTracker
                cp /home/stadiatracker/.env /home/stadiatracker/StadiaTracker/StadiaTracker/.env
                pip -q install -r /home/stadiatracker/StadiaTracker/requirements.txt
                python /home/stadiatracker/StadiaTracker/manage.py migrate
                mkdir /home/stadiatracker/StadiaTracker/static
                mkdir /home/stadiatracker/StadiaTracker/staticfiles
                python /home/stadiatracker/StadiaTracker/manage.py collectstatic --noinput -v0
                systemctl daemon-reload
                systemctl restart stadiatracker
The general workflow is:
1. Create a branch on my local computer with `git switch -c branch_name`
2. Push the code changes to GitHub which kicks off the `django.yaml` workflow.
3. If everything passes then I do a pull request from `branch_name` into `dev`.
4. This kicks off the `dev.yaml` workflow which will update UAT
5. I check UAT to make sure that everything works like I expect it to (it almost always does … and when it doesn’t it’s because I’ve mucked around with a server configuration which is the problem, not my code)
6. I do a pull request from `dev` to `main` which updates my production server
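In command form, that workflow looks roughly like this (a sketch, with
`branch_name` standing in for whatever feature branch I'm working on):
    git switch -c branch_name          # step 1: create a local feature branch
    # ... make changes, commit ...
    git push -u origin branch_name     # step 2: the push kicks off django.yaml
    # steps 3-4: open a PR from branch_name into dev, which kicks off dev.yaml
    # step 5: check that everything works on UAT
    # step 6: open a PR from dev into main, which updates production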
My next enhancement is to kick off the `dev.yaml` process if the tests from
`django.yaml` all pass, i.e. do an auto merge from `branch_name` to `dev`, but
I haven’t done that yet.
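If I do build it, one possible shape is GitHub's `workflow_run` trigger,
something like the sketch below. This is untested, the names are mine, and it
assumes the default `GITHUB_TOKEN` is allowed to push to `dev`:
    # hypothetical auto-merge workflow - a sketch, not something I run yet
    name: Auto Merge to Dev
    on:
      workflow_run:
        workflows: ['Django CI']
        types:
          - completed
    jobs:
      merge:
        runs-on: ubuntu-18.04
        # only merge when the triggering test run succeeded
        if: ${{ github.event.workflow_run.conclusion == 'success' }}
        steps:
          - uses: actions/checkout@v1
          - name: merge tested branch into dev
            run: |
              git fetch origin dev ${{ github.event.workflow_run.head_sha }}
              git checkout dev
              git merge --ff-only ${{ github.event.workflow_run.head_sha }}
              git push origin dev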
",2021-03-14,enhancements-using-github-actions-to-deploy,"Integrating a version control system into your development cycle is just kind
of one of those things that you do, right? I use GitHub for my version
control, and GitHub Actions to help with my deployment process.
There are 3 `yaml` files I have to get my local …
",Enhancements: Using GitHub Actions to Deploy,https://www.ryancheley.com/2021/03/14/enhancements-using-github-actions-to-deploy/
ryan,musings,"My daughter Abby was in the Robotics class at school this year. This gave her
(and us as a family) the opportunity to go to the [Global Conference on
Educational and Robotics](https://kipr.org/global-conference-on-educational-
robotics) which was held in Norman, Oklahoma.
Being in Oklahoma we had a golden opportunity to road trip from Oklahoma back
to home in California, so we did.
The trip went like this:
Fly from San Diego to Oklahoma City via Phoenix. Once we landed we were in the
Oklahoma City / Norman area for a week as Abby competed in GCER.
While there, Emily and I were able to explore quite a bit, visiting Downtown
Norman very nearly every day we were there. The neatest part of the Oklahoma
segment was our drive down to Washington, OK where Emily’s grandfather was
born (or spent time as a child ... I’m not really sure).
Once we left Oklahoma we started the road trip in earnest. I’ve tried to
create a Google Maps version of the trip, but the number of places we stopped
is more than you can enter into a trip in Google maps.
Here are the vital statistics:
* miles driven: 3730
* cities visited: 17
* national parks visited: 7
* Baseball games seen: 3
And here are the details:
* Norman, OK
* Joplin, MO
* St. Louis, MO
* Hermann, MO
* Jefferson City, MO
* Kansas City, MO
* Omaha, NE
* Sioux Falls, SD
* De Smet, SD
* Pierre, SD
* Black Hills, SD
* Box Elder, SD
* Rapid City, SD
* Jewel Cave
* Wind Cave
* Hot Springs, SD
* Cheyenne, WY
* Greeley, CO
* Denver, CO
* Grand Junction, CO
* Arches National Park, UT
* Cedar City, UT
We got to watch the OKC Dodgers, St. Louis Cardinals, and Kansas City Royals
all play and in each case the home team won. This was good because none of the
MLB teams we saw were playing the LA Dodgers, and it’s always fun to see the
home team win.
Finally, I also learned some things on the trip:
* There's a ton of stuff to do in Norman
* Missouri is _really_ into World War I and it's kind of weird
* Omaha is the Silicon Valley of the midwest ... so much so that they call it the Silicon Prairie
* Denver isn't actually in the mountains. It's just really high in the Great Plains on the way to the Rockies
* Grand Junction is NOT a mountain town
* Cedar City is more than just the little Main Street that I've seen before ... we stayed at a farm while we were there
The family is all glad to be home, and tomorrow it’s back to normal life. I
have to say, I’m really looking forward to it.
",2019-07-28,epic-family-road-trip-2019-edition,"My daughter Abby was in the Robotics class at school this year. This gave her
(and us as a family) the opportunity to go to the [Global Conference on
Educational and Robotics](https://kipr.org/global-conference-on-educational-
robotics) which was held in Norman, Oklahoma.
Being in Oklahoma we had a golden opportunity to road trip from …
",Epic Family Road trip - 2019 edition,https://www.ryancheley.com/2019/07/28/epic-family-road-trip-2019-edition/
ryan,musings,"## What is Error Culture?
It's inevitable that at some point a service 1 will fail. When that service
fails you can either choose to be alerted, or not. Because technology is so
important to so many aspects of work, not getting an alert for a failing
service isn't really an option. So we enable alerts ... for EVERYTHING.
This is good in that we know when things have gone bad ... but it's bad in
that we can start to ignore these alerts because we get false positives. If
you hear comments like,
> Oh yeah, that error always comes up, but we just ignore it because it
> doesn't mean anything
or
> We don't really know why that error occurs, but it doesn't seem to impact
> anything, so we just ignore it
This is what I am calling, ""Error Culture"".
## OK, but why is that bad?
Initially, it might not _feel_ bad.
**EVERYONE** knows that you can ignore that error because it doesn't mean
anything. Of course, this knowledge tends to **NOT** be documented anywhere,
so when you onboard new team members they don't know what **EVERYONE** knows
... because they weren't part of the **EVERYONE** that learned the lesson.
Additionally, if you're getting error messages and nothing truly bad ever
happens, then a few things can happen:
1. People start to question ALL of the alerts. I mean, if this one isn't valid, why is this OTHER one valid? Maybe I can ignore both 🤷♂️
2. You may be getting an alert about a small thing that can be ignored until it's a BIG thing. I think this image does a good job of illustrating the point (found [here](https://naksecurity.medium.com/the-detriments-of-hero-culture-3fc455963d6e))

## Why does it happen?
In general, I've found that error culture can happen for a few reasons.
### Error Fatigue
If you get 1000 alerts every day, you're not going to be able to do anything
about anything. This is a similar phenomenon to 'Alert Fatigue' which can
happen in software applications (my experience is in Electronic Health Record
systems) where users can just click `OK` or `Cancel` when an alert shows up
and may not actually see that there is a problem
### Lack of understanding of what the error is
It's surprising to find people that receive alerts and just delete them. They
do this not out of malice, but because they honestly do not know what
the alert is for. They were maybe opted into the alert (with no way to opt
out) and therefore have no idea why they get it or what they are supposed to
do with it. They may also be in an organization where asking questions to
learn isn't encouraged and will therefore not ask why they are getting the
alert.
### Lack of understanding of why the error is important
Related to the item above, but different, a person can receive an alert, but
they don't understand why it's important. This is usually manifested in some
of the things mentioned before. Ideas like,
> well, I've ignored this alert every day for 6 months, I don't know why I
> need to do anything about it now!
### Lack of understanding of who the error will impact
I'm reminded of the Episode of
[Friends](https://youtu.be/pMuVm1Y669U?si=--E-MDfTWPlHjBqk&t=180) where there
is a light switch in Chandler and Joey's apartment and they don't know what
it's for. At the end of the episode Monica is idly flipping the switch off and
on and the camera pans to Monica and Rachel's apartment where their TV keeps
turning off and on.
Error culture can have a similar feeling. If I get an error every few days,
but it doesn't impact me or my work I am likely to ignore it. It could be that
the error is unimportant for me, but HUGELY important for you. This is a case
where the error is being directed incorrectly. If we both got the error you
could see that I got the email and then ask, hey, are you going to do anything
about this?
### Emphasis on Hero Culture
This is probably the worst of all possibilities. Some cultures tend to
emphasize Heroes or White Knights. They appreciate when someone comes in and
'Saves the Day'. Sometimes people get promoted because of this.
This tends to disincentivize the idea of fixing small problems before they
become BIG problems. I might be getting an alert about an issue, but it's not
a BIG deal and won't be for some time. Once it becomes a big deal I'll know
how to fix it quickly, and I will. When I do, I'll be celebrated. Who wouldn't
want that?
In this post I've identified some of the characteristics of Error Culture.
In the next post I'll talk about how to tell if you're in an Error Culture.
In the final post I'll write about what you might be able to do to mitigate,
and maybe even eliminate, Error Culture where you are.
1. When I say service here I mean very loosely anything from a micro service up to a physical server. ↩︎
",2023-10-29,error-culture,"## What is Error Culture?
It's inevitable that at some point a service 1 will fail. When that service
fails you can either choose to be alerted, or not. Because technology is so
important to so many aspects of work, not getting an alert for a failing
service isn't really an …
",Error Culture,https://www.ryancheley.com/2023/10/29/error-culture/
ryan,musings,"In my last post I spoke about the idea of [Error
Culture](https://www.ryancheley.com/2023/10/29/error-culture/). In that post I
defined what error culture is. This time I'll talk about when it starts to happen.
For a recap go back and read that before diving in here.
# When does error culture start?
Error culture can start because of internal reasons, external reasons, or both,
and is almost always driven by the best of intentions. Error culture starts
to happen because we don't finish the alert process. That is, we set up the
alerts, but we don't indicate why they are important or what to do about them
when we're notified.
## Internal
Internal pressures driving error culture can usually be traced back to someone
(usually someone important 1) declaring that ‘we’ need to be notified of when
‘this’ happens again. In and of itself, this is actually a really good
idea.
But if the important person doesn't identify **why** we need to be notified
all that happens is that an alert is set up and NO ONE knows what to do when
it fires off.
The opposite side of the coin here is being proactive in wanting to be
notified when a bad thing **might** happen and being notified **might** be
useful. Again, if there is no definition for why the alert might be useful,
you're simply creating noise and encouraging alerts to be ignored.
## External
External pressures that can drive error culture are similar to internal ones.
There are some slight differences though.
For example, a consultant might indicate that it is _best practice_ ™ to be
notified of an alert. However, they don't provide more context for why it's
best practice. It could very well be that the recommendation IS best practice,
but for a user base that is 100x your user base, or for an organization that
is 1/10th your size. Context matters and while best practices should scale,
they don't always.
Another example of external drivers are software applications provided by
third party vendors with default alerts enabled but no context or steps for
resolution. Sometimes there will be documentation describing the alert
process, but without the context for why the alert is important it's just as
likely to be ignored.
So far in this series we've seen what error culture is, and when it starts to
happen. In the next post I'll talk about how to identify if you're in an error
culture.
1. important here just means someone with influence ↩︎
",2023-11-09,error-culture-part-ii,"In my last post I spoke about the idea of [Error
Culture](https://www.ryancheley.com/2023/10/29/error-culture/). In that post I
defined what error culture is. This time I'll talk about when it starts to happen.
For a recap go back and read that before diving in here.
# When does error culture start?
Error culture can …
",Error Culture Part II,https://www.ryancheley.com/2023/11/09/error-culture-part-ii/
ryan,musings,"# How can I tell if I'm in an error culture?
In part 1 I spoke about the idea of [Error
Culture](https://www.ryancheley.com/2023/10/29/error-culture/). In that post I
defined what error culture is.
In part 2 I spoke about when [Error
Culture](https://www.ryancheley.com/2023/11/09/error-culture-part-ii/) starts.
This time I'll talk about how you can tell if you're living in an Error
Culture, and what you can do about it.
Below are a couple of tell-tale signs I've found to determine if you're living
in an error culture.
## Email Rules
You start your day and fire up your email client. As the application opens up
you see the number of unread messages go from 500 down to 20. You think back to
a time when you would open your email client and have to trudge through ALL 500
of those emails. Now though ... now you've outsmarted the email system by
implementing several rules to ignore or hide those pesky emails that don't
seem to mean anything.
## Instinct to just delete emails
Maybe you don't know about the amazing opportunities that email client rules
offer, so you start going through your emails. You delete the ones you
**know** aren't useful or don't mean anything.
Or maybe you do know about rules and of the remaining 20 you notice a few new
emails that you don't need to act on. Your first instinct is to delete them,
but you remember you are a smart email user and create a new rule to get rid
of those emails as well.
## Why do I get this email anyway?
If you use rules, you recall a time before you had them. A time when you would
methodically read each email and write down a quick note to ask a co-worker,
or your boss at your next one on one. But when you brought up the alerts you
had one of two reactions:
* Oh those ... yeah, you can just delete them. They don't mean anything
* Ugh ... how do you **not** know what that is for? Fine, let me explain it to you ... **again**
The first item is definitely error culture. The second response could be error
culture if the person you've asked is just so overwhelmed with all of the
alerts ... OR it could just be a toxic culture. If it's a toxic culture, I'm
sorry, but this post might not be helpful in solving that problem.
If you're not in the second situation you may (rightfully) ask
> why do we get it if we can just delete it?
And if the answer is 🤷♂️ then you might be in an error culture.
In general, if no one knows WHY we're getting an email and there is no
actionable direction, you might be in an error culture.
## Email Alerts
Ask yourself, your peers, and your boss this question
> Is this alert we are getting actually important?
If the answer is No, then delete the mechanism that generates the error. Don't
just create a rule to delete the alert.
If the answer is Yes, then ask
> Is the alert you are getting actionable?
If the answer is No then update the alert to be actionable. This can be done
by
1. Including steps to resolution or documentation link for resolving the error
2. Update the alert to indicate its importance
3. Update the alert to go to the correct people
If the answer is Yes then
1. Make sure the error indicates what the fix needs to be
2. Make sure the error indicates why it’s important, or a link to documentation that explains it
3. Make sure the right people are being notified
Point three here is really important. To determine if the correct people are
being notified, ask this question of EVERYONE that receives the alert:
> Are you the correct person to do something to fix the error?
If the answer is No then getting removed from the email is the best course of
action.
Of course, it could be that no one ever told you why you were getting the
alert so the decision to remove people from alerts may need to be a management
level decision, but it can at least start the conversation.
If the answer is Yes, then ask if they (i.e. the person being asked) know what
to do to fix the error.
Again, with a simple yes or no response, you have two options:
Yes: Does the error indicate what the fix needs to be or where to go to find
out? No: Work to update the error to make it actionable
This can help to get the right people getting the alerts.
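To make that flow concrete, here's a minimal sketch of the question-and-answer
logic in Python (the function and argument names are mine, purely illustrative,
and not from any alerting tool):
    def triage_alert(is_important, is_actionable, recipient_can_fix):
        '''Walk one alert through the questions above; return the next step.'''
        if not is_important:
            # don't just filter the email, remove the thing that sends it
            return 'delete the mechanism that generates the alert'
        if not is_actionable:
            return ('add steps to resolution or a documentation link, '
                    'say why it matters, and route it to the right people')
        if not recipient_can_fix:
            return 'remove this recipient, or raise it as a management decision'
        return 'keep the alert; make sure it states the fix and why it matters'
    # example: an important alert that gives the reader nothing to act on
    print(triage_alert(is_important=True, is_actionable=False, recipient_can_fix=True))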
Below is a flow chart to help make alerts better

None of this is easy to change. You may have managers that don't answer your
questions when you ask whether someone should receive an alert.
You may not get feedback from your peers, or manager about cleaning up the
alert system. But if you can become a champion for the effort it will be very
helpful for everyone involved.
If you implement something like this you can increase the signal to noise
ratio for you and your team. That seems like a big win for everyone.
",2023-11-14,error-culture-part-iii,"# How can I tell if I'm in an error culture?
In part 1 I spoke about the idea of [Error
Culture](https://www.ryancheley.com/2023/10/29/error-culture/). In that post I
defined what error culture is.
In part 2 I spoke about when [Error
Culture](https://www.ryancheley.com/2023/11/09/error-culture-part-ii/) starts.
This time I'll talk about how you can tell if you're living …
",Error Culture Part III,https://www.ryancheley.com/2023/11/14/error-culture-part-iii/
ryan,technology,"On my way back from Arizona a few weeks ago I decided to play around with
Drafts a bit. Now I use Drafts every day. When it went to a subscription model
more than a year ago it was a no brainer for me. This is a seriously powerful
app when you need it.
But since my initial workflows and shortcuts I've not really done too
[much](/creating-hastags-for-social-media-with-a-drafts-action.html) with it.
But after listening to some stuff from [Tim Nahumck](https://nahumck.me) I
decided I needed to invest a little time ... and honestly there's no better
time than cruising at 25k feet on your way back from Phoenix.
Ok, first of all I never really understood workspaces. I had some set up but I
didn't get it. That was the first place I started.
Each workspace can have its own action and keyboard shortcut thing which I
didn't realize. This has so much potential. I can create workspaces for all
sorts of things and have the keyboard shortcut things I need when I need them!
This alone is mind blowing and I'm disappointed I didn't look into this
feature sooner.
I have 4 workspaces set up:
* OF Templates
* O3
* Scrum
* post ideas
Initially since I didn't really understand the power of the workspace I had
them mostly as filtering tools to be used when trying to find a draft. But now
with the custom action and keyboards for each workspace I have them set up to
filter down to specific tags AND use their own keyboards.
The OF Template workspace is used to create OmniFocus projects based on
Taskpaper markup. There are a ton of different actions that I took from [Rose
Orchard](https://www.relay.fm/people/rose-orchard) (of
[Automators](https://automators.fm) fame) that help to either add items with
the correct syntax to a Task Paper markdown file OR turn the whole thing into
an OmniFocus project. Simply a life saver for when I really know all of the
steps that are going to be involved in a project and I want to write them all
down!
The O3 workspace is used for processing the notes from the one-on-one I have
with my team. There are really only two actions: Parse O3 notes and Add to O3
notes. How are these different? I have a Siri Shortcut that populates a Draft
with a template that collects the name of the person and the date time that
the O3 occurred. This is the note that is parsed by the first action. The
second action is used when someone does something that I want to remember
(either good or bad) so that I can bring it up at a more appropriate time (the
best time to tell someone about a behavior is right now, but sometimes
circumstances prevent that) so I have this little action.
In both cases they append data to a markdown file in Dropbox (I have one file
per person that reports to me). The Shortcut also takes any actions that need
to be completed and adds them to OmniFocus for me to review later.
The third workspace is Scrum. This workspace has just one action which is
""Parse scrum notes"". Again, I have a template that is generated from Siri
Shortcuts and dropped into Drafts. During the morning standup meetings I have
with my team this Draft will have the things I did yesterday, what I'm working
on today, and any roadblocks that I have. It also creates a section where I can
add actions which, when the draft is parsed, go into OmniFocus for me to
review later (currently the items get added with a due date of today at 1pm
... but I need to revisit that).
The last workspace is post ideas (which is where I'm writing this from). Its
custom keyboard is just a markdown one with quick ways to add markdown syntax
and a Preview button so I can see what the markdown will render out as.
It's still a work in progress as this draft will end up in Ulysses so it can
get posted to my site, [but I've seen that I can even post from Drafts to
Wordpress](https://www.macstories.net/reviews/drafts-5-4-siri-shortcuts-
wordpress-and-more/) so I'm going to give that a shot later on.
There are several other ideas I have bouncing around in my head about ideas
for potential workspaces. My only concern at this point is how many workspaces
can I have before there are too many to be used effectively.
So glad I had the time on the flight to take a look at workspaces. A huge
productivity boost for me!
",2019-05-05,figuring-out-how-drafts-really-works,"On my way back from Arizona a few weeks ago I decided to play around with
Drafts a bit. Now I use Drafts every day. When it went to a subscription model
more than a year ago it was a no brainer for me. This is a seriously powerful
app …
",Figuring out how Drafts REALLY works,https://www.ryancheley.com/2019/05/05/figuring-out-how-drafts-really-works/
ryan,microblog,"Over the last 2 seasons the Coachella Valley Firebirds were 15-1 against the
San Jose Barracuda. The one loss over those 2 seasons was a 5-3 loss at home
that was a bit closer than the score showed. Coming into this season I really
didn't have any reason to think anything other than we'd be on the same
trajectory of beating the Cuda more often than not.
I was wrong.
Coming into tonight's game the Firebirds were 0-4-0-2 against the Cuda with 2
of those losses by only 1 goal ... the hardest being the Teddy Bear Toss in San
Jose where the Cuda won 1-0. It was brutal to watch.
Coming into tonight's game I didn't have super high expectations. I texted a
friend of mine
> OK, in the previous 2 seasons the Firebirds are 15-1 against the Cuda. This
> year, they're 0-4-0-2. and 4 of those losses are 1 by 1 goal. What's even
> wilder is that 5 of Stezka's losses are against the Cuda. I think we'll be
> getting Grubauer today since Stezka is up with Seattle. I'm still a little
> unsettled about playing them, but maybe this time it will be different?
He replied
> It will be different, let's get it!
He was correct. [The Firebirds finally got a W against the
Cuda](https://theahl.com/stats/game-center/1027270) ... although the 5-3 score
was closer than I would have liked it to be ... and there were plenty of
chances for the Cuda to tie it up in the last 90 seconds. But finally, the
first win.
For the 2024-25 season we're now 1-4-0-2.
Hopefully we can keep up the winning ways!
",2025-02-23,finally,"Over the last 2 seasons the Coachella Valley Firebirds were 15-1 against the
San Jose Barracuda. The one loss over those 2 seasons was a 5-3 loss at home
that was a bit closer than the score showed. Coming into this season I really
didn't have any reason to think …
",Finally,https://www.ryancheley.com/2025/02/23/finally/
ryan,musings,"On Wednesday June 21, 2023 the local sports puck team (i.e. Hockey), the
[Coachella Valley Firebirds](https://cvfirebirds.com/) hosted [Game
7](https://theahl.com/stats/game-center/1025179) of the [Calder
Cup](https://en.wikipedia.org/wiki/Calder_Cup) Finals against the [Hershey
Bears](https://www.hersheybears.com/).
There are sports writers that can write on how the series went, better than I
can so I'll leave that to the pros. What I will talk about is why watching
that game and seeing the Firebirds lose in Overtime hit me so hard.
I'm generally an introverted person. Even before the pandemic, I wasn't
particularly fond of attending crowded events. The pandemic only intensified
my preference for solitude. Suddenly, I found myself being advised to avoid
social interactions altogether. As an introvert, the circumstances
necessitating isolation weren't exactly ideal for me, but I did appreciate the
fact that my family and I had to isolate.
However, after 2+ years of isolating from most everyone, being in large groups
would bring out anxiety. And when I say large groups I mean like 10, maybe 15
people. On December 18th there was a work holiday get-together, the first one
since the pandemic started. There were about 100 people in a mostly enclosed
space and I did not do well with it. Super anxious, wore a mask the entire
time, and generally ducked into the closet that also serves as my office more
than once just to get away from people.
That same night was the home opener for the Firebirds at Acrisure Arena (due
to construction delays their home arena opened 2 1/2 months after the start of
the season). I didn't know it at the time, but it was a sell out (attendance
of 10,087). This meant that I was going to a sporting event, in an enclosed
arena with 10,000+ people. To say that I nearly lost my shit would be an
understatement. The only thing that really got me to go was that the tickets I
had purchased weren't cheap, and my wife and I were going with another couple
we're friends with.
That [first home game](https://theahl.com/stats/game-center/1024284) was
amazing. The Firebirds won 4-3 over the Tucson Roadrunners. The energy was
amazing and I decided that I _had_ to go to another game. And so I kept going.
Again and again and again. I saw 34 games in person with an average attendance
of 7,500.
I'd like to say that ""just like that"" my anxiety surrounding large indoor
gatherings was gone, but it wasn't. It took me going to lots of hockey games
to get through it.
So coming back to game 7 on Wednesday night. Less than 1 minute into the
second period the Firebirds scored their second goal to go up 2-0. The crowd
was the loudest I'd ever heard at Acrisure. Chants of ""we want the cup"" roared
through the arena. It was unreal. And I sat there and realized that if it
hadn't been for this team my anxiety surrounding large gatherings wouldn't
have gone away for probably a very long time. And other than being a HUGE fan,
I wanted the players, coaches, and team to win because they had helped me deal
with something so personal. I won't ever be able to repay them for that, but
my cheering them on to try and win the cup could maybe start.
And then the unthinkable happened. A penalty was called on the Firebirds and a
Power Play goal was scored. Then less than 4 minutes later an even strength
goal was scored and we were tied at 2 a piece.
The third period ended without any scoring by either team, and for only the
second time in Calder Cup finals history, the first time since 1953, we were
going to Overtime in a Game 7.
As we entered Overtime everyone in my section (107) was on their feet. We
stood for the entire overtime period. Cheering, and screaming (honestly, I was
still exhausted from the experience as I wrote this 2 days later).
About 2 minutes into the Overtime period Ryker Evans sent a shot on goal. From
where I was sitting I could see the flight of the puck and my heart leapt as I
thought it would find the back of the net ... but sadly it didn't. Within the
first five minutes of overtime the Firebirds had outshot the Bears 5-0. It
seemed like we were in control.
The next 10 minutes was some of the most intense back and forth hockey I'd
ever seen.
With less than 4 minutes on the clock I thought, this might go into double
overtime ... and then the unthinkable happened. The Firebirds defense was
unable to clear a puck in their end, lots of players in front of the net, and
just like that I see a puck flying over Joey's shoulder and past the cross
bar, hitting the back of the net. The Bears player and their fans roared with
joy, and suddenly a once deafening Acrisure was stunned into silence.
We lost. They won. The inaugural season was over. I stood in disbelief for a
minute and then just sat down and stared across the arena at the Bears fans I
could see that were losing their minds with joy. I wanted to cry. Some people
around me did.
I stood up and looked over at our defensive end. The Firebirds players on the
ice had taken a knee as they watched the Bears players celebrate. They don't
show that part on TV. The defeated team looking sadly on as the victors
celebrate. It was heartbreaking.
And then, in the middle of the celebration, the chants of ""Let's go Firebirds""
started. In short order, the fans were all saying it as loud as they could. An
amazing season that didn't end the way we wanted it to, but we did our best to
let the team know what they meant to us.
When I started writing this I thought maybe it was just me that needed
something like this to get over some of the anxiety of large indoor
gatherings, but maybe it was others. And those others at that game let the
team know how much we appreciated them and what they did. This team will
always hold a special place in the hearts of its fans.
We didn't win it all this year, but there's always next year. Always.
## Postlude
A friend of a friend of a friend works at a golf course called the 'Classic
Club'. There were 3 players that were golfing the next day and they told this
friend of a friend of a friend that the chants of ""Let's go Firebirds"" even
after the loss meant so much to them.
",2023-07-01,firebirds-inaugural-season,"On Wednesday June 21, 2023 the local sports puck team (i.e. Hockey), the
[Coachella Valley Firebirds](https://cvfirebirds.com/) hosted [Game
7](https://theahl.com/stats/game-center/1025179) of the [Calder
Cup](https://en.wikipedia.org/wiki/Calder_Cup) Finals against the [Hershey
Bears](https://www.hersheybears.com/).
There are sports writers that can write on how the series went, better than I
can so I'll leave that to …
",Firebirds Inaugural Season,https://www.ryancheley.com/2023/07/01/firebirds-inaugural-season/
ryan,technology,"I’ve written before about how easy it is to update your version of Python
using homebrew. And it totally is easy.
The thing that isn’t super clear is that when you do update Python via
Homebrew, it seems to break your virtual environments in PyCharm. 🤦♂️
I did a bit of searching to find this nice [post on the JetBrains
forum](https://intellij-support.jetbrains.com/hc/en-
us/community/posts/360000306410-Cannot-use-system-interpreter-in-PyCharm-
Pro-2018-1) which indicated
> Unfortunately it's a known issue. Please close PyCharm and
> remove the jdk.table.xml file from the ~/Library/Preferences/.PyCharm2018.1/options
> directory, then start PyCharm again.
OK. I removed the file, but then you have to rebuild the virtual environments
because that file is what stores PyCharm's knowledge of those virtual
environments.
In order to get you back to where you need to be, do the following (after
removing the `jdk.table.xml` file):
1. `pip freeze > requirements.txt`
2. Remove old virtual environment `rm -r venv`
3. Create a new virtual environment with PyCharm
1. Go to Preferences
2. Project > Project Interpreter
3. Show All
4. Click ‘+’ button
4. `pip install -r requirements.txt`
5. Restart PyCharm
6. You're back
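Roughly, the command-line half of those steps looks like this (a sketch; the
paths assume the same PyCharm version as above and a venv living in `venv/`):
    # remove PyCharm's stale interpreter registry
    rm ~/Library/Preferences/.PyCharm2018.1/options/jdk.table.xml
    cd /path/to/your/project
    pip freeze > requirements.txt   # capture the packages you had installed
    rm -r venv                      # remove the old virtual environment
    # recreate the venv (PyCharm's Preferences UI can do this step instead)
    python3 -m venv venv
    source venv/bin/activate
    pip install -r requirements.txt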
This is a giant PITA but thankfully it didn’t take too much to find the issue,
nor to fix it. With that being said, I totally shouldn’t have to do this. But
I’m writing it down so that once Python 3.8 is available I’ll be able to
remember what I did to fix going from Python 3.7.1 to 3.7.5.
",2019-11-14,fixing-a-pycharm-issue-when-updating-python-made-via-homebrew,"I’ve written before about how easy it is to update your version of Python
using homebrew. And it totally is easy.
The thing that isn’t super clear is that when you do update Python via
Homebrew, it seems to break your virtual environments in PyCharm. 🤦♂️
I did a …
",Fixing a PyCharm issue when updating Python made via HomeBrew,https://www.ryancheley.com/2019/11/14/fixing-a-pycharm-issue-when-updating-python-made-via-homebrew/
ryan,technology,"In my last post I indicated that I may need to
> reinstalling everything on the Pi and starting from scratch
While speaking about my issues with `pip3` and `python3`. Turns out that the
fix was easier than I thought. I checked to see where `pip3` and `python3`
were being executed from by running the `which` command.
The `which pip3` returned `/usr/local/bin/pip3` while `which python3` returned
`/usr/local/bin/python3`. This is exactly what was causing my problem.
To verify what version of python was running, I checked `python3 --version`
and it returned `3.6.0`.
To fix it I just ran these commands to _unlink_ the new, broken versions:
`sudo unlink /usr/local/bin/pip3`
And
`sudo unlink /usr/local/bin/python3`
I found this answer on
[StackOverflow](https://stackoverflow.com/questions/7679674/changing-default-
python-to-another-version ""Of Course the answer was on Stack Overflow!"") and
tweaked it slightly for my needs.
Now, when I run `python --version` I get `3.4.2` instead of `3.6.0`
Unfortunately I didn’t think to run the `--version` flag on pip before and
after the change, and I’m hesitant to do it now as it’s back to working.
",2018-02-13,fixing-the-python-3-problem-on-my-raspberry-pi,"In my last post I indicated that I may need to
> reinstalling everything on the Pi and starting from scratch
While speaking about my issues with `pip3` and `python3`. Turns out that the
fix was easier than I thought. I checked to see where `pip3` and `python3`
were being …
",Fixing the Python 3 Problem on my Raspberry Pi,https://www.ryancheley.com/2018/02/13/fixing-the-python-3-problem-on-my-raspberry-pi/
ryan,technology,"I was listening to the most recent episode of
[ATP](http://atp.fm/episodes/302) and John Siracusa mentioned a programmer
test called [fizz buzz](http://wiki.c2.com/?FizzBuzzTest) that I hadn’t heard
of before.
I decided that I’d give it a shot when I got home using Python and Bash, just
to see if I could (I was sure I could, but you know, wanted to make sure).
Sure enough, with a bit of googling to remember some syntax of Python, and
learn some syntax for bash, I had two stupid little programs for fizz buzz.
## Python
    def main():
        my_number = input(""Enter a number: "")
        if not my_number.isdigit():
            return
        else:
            my_number = int(my_number)
        if my_number % 3 == 0 and my_number % 15 != 0:
            print(""fizz"")
        elif my_number % 5 == 0 and my_number % 15 != 0:
            print(""buzz"")
        elif my_number % 15 == 0:
            print(""fizz buzz"")
        else:
            print(my_number)
    if __name__ == '__main__':
        main()
## Bash
    #! /bin/bash
    echo ""Enter a Number: ""
    read my_number
    re='^[+-]?[0-9]+$'
    if ! [[ $my_number =~ $re ]] ; then
        echo ""error: Not a number"" >&2; exit 1
    fi
    if ! ((my_number % 3)) && ((my_number % 15)); then
        echo ""fizz""
    elif ! ((my_number % 5)) && ((my_number % 15)); then
        echo ""buzz""
    elif ! ((my_number % 15)) ; then
        echo ""fizz buzz""
    else
        echo ""$my_number""
    fi
And because if it isn’t in GitHub it didn’t happen, I committed it to my
[fizz-buzz repo](https://github.com/ryancheley/fizz-buzz).
I figure it might be kind of neat to write it in as many languages as I can,
you know … for when I’m bored.
",2018-11-28,fizz-buzz,"I was listening to the most recent episode of
[ATP](http://atp.fm/episodes/302) and John Siracusa mentioned a programmer
test called [fizz buzz](http://wiki.c2.com/?FizzBuzzTest) that I hadn’t heard
of before.
I decided that I’d give it a shot when I got home using Python and Bash, just
to see if I could …
",Fizz Buzz,https://www.ryancheley.com/2018/11/28/fizz-buzz/
ryan,technology,"[Last October it was announced](https://www.fiercehealthcare.com/health-
tech/google-health-notches-another-provider-partner-care-studio) that Desert
Oasis Healthcare (the company I work for) signed on to pilot [Google's Care
Studio](https://health.google/caregivers/care-studio/). DOHC is the first
ambulatory clinic to sign on.
I had been in some of the discovery meetings before the announcement and was
really excited about the opportunity. So far our use of any Cloud platforms at
work has been extremely limited (that is to say, we don't use ANY of the big
three cloud solutions for our tech) so this seemed to provide a really good
opportunity.
As we worked through the project scoping there were conversations about the
handoff to DOHC and it occurred to me that I didn't have any knowledge of what
GCP offered, what any of it did, or how any of it could work.
I've had on my 'To Do' list to learn one of the Big Three Cloud services (AWS,
Azure, or GCP) but because we didn't use ANY of them at work I was (a) worried
about picking the 'wrong' one and (b) worried that even if I picked one I'd
NEVER be able to use it!
The partnership with Google changed that. Suddenly which cloud service to
learn was apparent AND I'd be able to use whatever I learned for work!
Great, now I know which cloud service to start to learn about ... the next
question is, ""What do I try to learn?"". In speaking with some of the folks at
Google they recommended one of three Certification options:
1. [Digital Cloud Leader](https://cloud.google.com/certification/cloud-digital-leader)
2. [Cloud Engineer](https://cloud.google.com/certification/cloud-engineer)
3. [Cloud Architect](https://cloud.google.com/certification/cloud-architect)
After reviewing each of them and having a good idea of what I **need** to know
for work, I opted for the Cloud Architect path.
Knowing which certification I was going to work towards, I started to see what
learning options were available for me. It just so happens that [Coursera
partnered with the California State Library to offer free
training](https://blog.coursera.org/coursera-partners-with-the-california-
state-library-to-launch-free-statewide-job-training-program/) which is great
because Coursera has [a learning path for the Cloud Architect
Exam](https://www.coursera.org/professional-certificates/gcp-cloud-architect)!
So I signed up for the first course of that path right before Thanksgiving and
started to work my way through the courses.
I spent most of the holidays working through these courses, going pretty fast
through them. The labs offered up are so helpful. They actually allow you to
work with GCP for FREE during your labs which is amazing.
After I made my way through the Coursera learning Path I bought the book
[Google Cloud Certified Professional Cloud Architect Study
Guide](https://www.amazon.com/dp/1119871050?psc=1&ref=ppx_yo2ov_dt_b_product_details)
which was really helpful. It came with 100 electronic flash cards and 2
practice exams, and each chapter had questions at the end.
I will say that the practice exams and chapter questions from the book weren't
really like the ACTUAL exam questions BUT it did help me in my learning,
especially regarding the case studies used in the exams.
I read through the book several times, and used the practice questions in the
chapters to drive what parts of the documentation I'd read to shore up my
understanding of the topics.
Finally, after about 3 months of pretty constant studying I took the test. I
opted for the remote proctoring option and I'd say that I really liked it. I
was able to take the test in the same place I had done most of my
studying. I did have to remove essentially EVERYTHING from my home office, but
not having to drive anywhere, and not having to worry about unfamiliar
surroundings really helped me out (I think).
I had 2 hours in which to answer 60 questions. My general strategy for taking
tests is to go through the test, mark questions that I'm unsure of and
eliminate answers that I know to not be true on those questions. Once I've
gone through the test I revisit all of the unsure questions and work through
those.
My final pass is to go through ALL of the questions and make sure I didn't do
something silly.
Using this strategy I used 1 hour and 50 minutes of the 2 hours ... and I
passed!
The unfortunate part of the test is that you only get a Pass or Fail so you
don't have any opportunity to know what parts of the exam you missed. Now, if
you fail this could be a huge help in working to pass it next time, but even
if you pass it I think it would be helpful to know what areas you might
struggle in.
All in all this was a pretty great experience and it's already helping with
the GCP implementation at work. I'm able to ask better questions because I'm
at least aware of the various services and what they do.
",2023-04-01,gcp-cloud-architect-exam-experience,"[Last October it was announced](https://www.fiercehealthcare.com/health-
tech/google-health-notches-another-provider-partner-care-studio) that Desert
Oasis Healthcare (the company I work for) signed on to pilot [Google's Care
Studio](https://health.google/caregivers/care-studio/). DOHC is the first
ambulatory clinic to sign on.
I had been in some of the discovery meetings before the announcement and was
really excited about the opportunity. So …
",GCP Cloud Architect Exam Experience,https://www.ryancheley.com/2023/04/01/gcp-cloud-architect-exam-experience/
ryan,musings,"I got a message on LinkedIn from a former colleague of my from [Arizona
Priority Care](https://azprioritycare.com) asking me:
> Wanted to pick your brain on something. what do you think the outlook is for
> a data analyst? Debating a masters program in that and covers a few things
> but also includes certifications in SAS. Trying to decide if that will “pay
> off” in the long run or if I should explore different disciplines.
This was a **really** good question and I thought about it a bit. My response
was:
> I think Data Analysis (or Data Science, or Analytics) are all going to play
> a huge role in business going forward and that it would be a smart move to
> get a masters degree in one of those. I would avoid any certification
> programs though, just because they can be less rigorous and don’t seem to
> have the same weight as a full degree.
>
> SAS is an interesting language, but I’d investigate what companies use SAS
> and make sure that you’d like to work for them (or in the industry). Many
> companies are turning towards open source Data Analytics tools (like R and
> Python). But in general, don’t get too hung up on the tool (SAS, Python, R)
> but really understand what you’re doing with them. Why would I choose this
> Standard Regression over Two Stage Least Squares. When do I wan to use a
> Logistics regression model and why. What does the output tell me, and what
> is it missing.
>
> Developing that understanding will allow you to really stand out.
>
> Good luck with your decision. Let me know which direction you decide to go
> in,
>
> Best,
>
> Ryan
I hope that I was able to help my former colleague and was super happy that he
reached out to me.
I wanted to write this up in a more public form just in case it helps someone,
or just in case I look back on it at some point and it helps me.
",2020-02-15,getting-asked-for-advice-on-being-a-data-analyst,"I got a message on LinkedIn from a former colleague of my from [Arizona
Priority Care](https://azprioritycare.com) asking me:
> Wanted to pick your brain on something. what do you think the outlook is for
> a data analyst? Debating a masters program in that and covers a few things
> but also includes …
",Getting asked for Advice on being a Data Analyst,https://www.ryancheley.com/2020/02/15/getting-asked-for-advice-on-being-a-data-analyst/
ryan,professional development,"Signing up for the actual exam may have been the most difficult and confusing
part. I had to be verified as someone that could take the test, and then my
membership needed to be verified (or something).
I received my confirmation email that I could sign up for the exam and read
through it to make sure I understood everything. Turns out, when you sign up
for the CPHIMS you need to use your FULL name (and I had just used my middle
and last name).
One email to the HIMSS people and we’re all set (need to remember that for
next time ... this exam is the real deal!)
I was going to be in Oceanside for the Fourth of July Holiday and decided to
sign up to take the exam in San Diego on the fifth. With a test date in hand I
started on my study plan.
Every night when I got home I would spend roughly 45 minutes reading the study
book, and going over Flash Cards that I had made with topics that I didn’t
understand. Some nights I took off, but it was a solid 35 days of studying for
45 minutes.
Now, 2 things I did not consider:
1. Scheduling an exam on the fifth is a little like scheduling an exam on Jan 1 ... not the best idea in the world
2. The place my family and I go to in Oceanside always has a ton of friends and family for the weekend (30+) and it would be a less than ideal place to do any last minute studying / cramming
I spent some of the preceding weekend reading and reviewing flash cards, but
once the full retinue of friends and family arrived it was pretty much over. I
had some chances to read on the beach, but for the most part my studying
stopped.
The morning of the fifth came. I made the 40-minute drive from Oceanside to
the testing center to take the CPHIMS exam for real.
",2017-07-16,getting-cphims-certified-part-ii,"Signing up for the actual exam may have been the most difficult and confusing
part. I had to be verified as someone that could take the test, and then my
membership needed to be verified (or something).
I received my confirmation email that I could sign up for the exam …
",Getting CPHIMS(R) Certified - Part II,https://www.ryancheley.com/2017/07/16/getting-cphims-certified-part-ii/
ryan,professional development,"One of my professional goals for 2017 was to get my [CPHIMS (Certified
Professional in Healthcare Information and Management
Systems)](http://www.himss.org/health-it-certification/cphims). The CPHIMS
certification is offered through HIMSS which “Demonstrates you meet an
international standard of professional knowledge and competence in healthcare
information and management systems”.
There was no requirement for my job to get this certification, I just thought
that it would be helpful for me if I better understood the **Information and
Management Systems** part of Healthcare.
With not much more than an idea, I started on my journey to getting
certification. I did some research to see what resources were available to me
and found a Practice Exam, a Book and a multitude of other helpful study aids.
I decided to start with the Practice Exam and see what I’d need after that.
In early March I signed up for the Practice Exam. I found all sorts of reasons
to put off taking the exam, but then I noticed that my Practice Exam had an
expiration date in May. One Sunday, figuring “what the hell, let’s just get this
over with,” I sat down at my
[iMac](https://support.apple.com/kb/sp707?locale=en_US) and started the exam.
I really had no idea what to expect other than 100 questions. After about 20
minutes I very nearly stopped. Not because the exam was super difficult, but
because I had picked a bad time to take a practice exam. My head wasn’t really
in the game, and my heart just wanted to go watch baseball.
But I powered on through. The practice exam was nice in that it would give you
immediate feedback if you got the question right or wrong. It wouldn’t be like
that on test day, but it was good to know where I stood as I went through this
practice version.
After 50 minutes I completed the exam and saw that I had a score of 70. I
figured that wouldn’t be a passing score, but then saw that the cutoff point
was 68. So I _passed_ the practice test.
OK, now it was time to get serious. Without any studying or preparation (other
than the 8+ years in HIT) I was able to pass what is arguably a difficult
exam.
The next thing to do was to sign up for the real thing ...
",2017-07-13,getting-cphimsr-certified-part-i,"One of my professional goals for 2017 was to get my [CPHIMS (Certified
Professional in Healthcare Information and Management
Systems)](http://www.himss.org/health-it-certification/cphims). The CPHIMS
certification is offered through HIMSS which “Demonstrates you meet an
international standard of professional knowledge and competence in healthcare
information and management systems”.
There was no requirement for …
",Getting CPHIMS(R) Certified - Part I,https://www.ryancheley.com/2017/07/13/getting-cphimsr-certified-part-i/
ryan,professional development,"I walked into the testing center at 8:30 (a full 30 minutes before my exam
start time as the email suggested I do).
I signed in and was given a key for a locker for my belongings and offered use
of the restroom.
I was then asked to read some forms and then was processed. My pockets were
turned out and my glasses inspected. I signed in (again) and had the signature
on my ID scrutinized with how I signed on test day. It only took three tries
... apparently 19-year-old me doesn’t sign his name like 39-year-old me.
Now it was test time ... even if I could remember any of the questions I
wouldn’t be able to write about them ... but I can’t remember them so it’s not
a problem.
It took me 80 minutes to get through the real test of 115 questions (15 are
there as ‘test’ questions that don’t actually count). The only real issues I
had were:
* construction noise outside the window to my left
* the _burping_ guy to my right ... seriously bro, cut down on the breakfast burritos
* one question that I read incorrectly 4 different times. On the fifth time I finally realized my mistake and was able to answer correctly (I think). As it turned out I had guessed what I thought was the correct answer but it was still a good feeling to get the number through a calculation instead of just guessing it
When the test was completed and my questions scored, the results came back. A
passing score is 600 out of 800. I scored 669 ... I am officially CPHIMS. The
scoring breakdown even shows areas where I didn’t do so well, so I know what
to focus on for the future. For reference, they are:
* Testing and Evaluation (which is surprising for me)
* Analysis (again, surprising)
* Privacy and Security (kind of figured this as it’s not part of my everyday job)
## Final Thoughts
When I set this goal for myself at the beginning of the year it was just
something that I wanted to do. I didn’t really have a reason for it other than
I thought it might be _neat_.
After passing the exam I am really glad that I did. I’ve heard myself say
things and think about things differently, like implementation using _Pilots_
versus _Big Bang_ or _By Location_ versus _By Feature_.
I’m also asking questions differently of my colleagues and my supervisors to
help ensure that we are doing things for the right reason at the right
time.
I can’t wait to see what I try to do next.
",2017-07-17,getting-cphimsr-certified-part-iii,"I walked into the testing center at 8:30 (a full 30 minutes before my exam
start time as the email suggested I do).
I signed in and was given a key for a locker for my belongings and offered use
of the restroom.
I was then asked to read …
",Getting CPHIMS(R) Certified - Part III,https://www.ryancheley.com/2017/07/17/getting-cphimsr-certified-part-iii/
ryan,technology,"I use Hover for my domain purchases and management. Why? Because they have a
clean, easy to use, not-slimy interface, and because I listened to enough Tech
Podcasts that I’ve drunk the Kool-Aid.
When I was trying to get my Hover Domain to point to my Digital Ocean server
it seemed much harder to me than it needed to be. Specifically, I couldn’t
find any guide on doing it! Many of the tutorials I did find were basically
like, it’s all the same. We’ll show you with GoDaddy and then you can figure
it out.
Yes, I can figure it out, but it wasn’t as easy as it could have been. That’s
why I’m writing this up.
## Digital Ocean
From Droplet screen click ‘Add a Domain’

Add 2 ‘A’ records (one for www and one without the www)

Make note of the name servers

## Hover
In your account at Hover.com change your Name Servers to Point to Digital
Ocean ones from above.

## Wait
DNS … does anyone _really_ know how it works?1 I just know that sometimes when
I make a change it’s out there almost immediately for me, and sometimes it
takes hours or days.
At this point, you’re just going to potentially need to wait. Why? Because DNS,
that’s why. Ugh!
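If you’d rather check than just wait, `dig` gives a quick read on what resolvers currently see. A minimal spot check, assuming `dig` is installed and using `yoursite.com` as a stand-in for your domain:
dig +short NS yoursite.com
dig +short A yoursite.com
Once the first command returns the Digital Ocean name servers and the second returns your Droplet’s IP, the change has propagated (at least to the resolver you’re asking).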
## Setting up directory structure
While we’re waiting for the DNS to propagate, now would be a good time to set
up some file structures for when we push our code to the server.
For my code deploy I’ll be using a user called `burningfiddle`, though the
commands below use `yoursite` as a placeholder; substitute your own user name.
We have to do two things here: create the user, and add them to the `www-data`
user group on our Linux server.
We can run these commands to take care of that:
adduser --disabled-password --gecos """" yoursite
The first line will add the user with no password and prevent them from logging
in until a password has been set. Since this user will NEVER log into the
server, we’re done with the user creation piece!
Next, add the user to the proper group
adduser yoursite www-data
Now we have a user and they’ve been added to the group we need them to be
added. In creating the user, we also created a directory for them in the
`home` directory called `yoursite`. You should now be able to run this command
without error
ls /home/yoursite/
If that returns an error indicating no such directory, then you may not have
created the user properly.
Now we’re going to make a directory for our code to be run from.
mkdir /home/yoursite/yoursite
To run our Django app we’ll be using virtualenv. We can create our virtualenv
directory by running this command
python3 -m venv /home/yoursite/venv
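One thing the steps above don’t show: Gunicorn (and Django itself) need to be installed into that virtualenv before the service we configure next can start. A minimal sketch, assuming no requirements file (adjust to your project’s dependencies):
/home/yoursite/venv/bin/pip install --upgrade pip
/home/yoursite/venv/bin/pip install django gunicorn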
## Configuring Gunicorn
There are two files needed for Gunicorn to run:
* gunicorn.socket
* gunicorn.service
For our setup, this is what they look like:
# gunicorn.socket
[Unit]
Description=gunicorn socket
[Socket]
ListenStream=/run/gunicorn.sock
[Install]
WantedBy=sockets.target
# gunicorn.service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=yoursite
EnvironmentFile=/etc/environment
Group=www-data
WorkingDirectory=/home/yoursite/yoursite
ExecStart=/home/yoursite/venv/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
yoursite.wsgi:application
[Install]
WantedBy=multi-user.target
For more on the details of the sections in both `gunicorn.service` and
`gunicorn.socket` see this
[article](https://www.digitalocean.com/community/tutorials/understanding-
systemd-units-and-unit-files ""Understanding systemd units and unit files"").
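One step that’s easy to miss: once `gunicorn.socket` and `gunicorn.service` are saved in `/etc/systemd/system/`, systemd needs to be told to load and start them. Assuming the file names above, something like:
systemctl daemon-reload
systemctl enable --now gunicorn.socket
You can then confirm the socket is listening with `systemctl status gunicorn.socket`.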
## Environment Variables
The only environment variables we have to worry about here (since we’re using
SQLite) are the DJANGO_SECRET_KEY and DJANGO_DEBUG
We’ll want to edit `/etc/environment` with our favorite editor (I’m partial to
`vim`, but use whatever you like):
vim /etc/environment
In this file you’ll add your DJANGO_SECRET_KEY and DJANGO_DEBUG. The file will
look something like this once you’re done:
PATH=""/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games""
DJANGO_SECRET_KEY=my_super_secret_key_goes_here
DJANGO_DEBUG=False
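These variables only do something if the Django settings actually read them. A minimal sketch of what that might look like in `settings.py`, assuming the variable names above (this part isn’t shown in the setup itself):
import os

# read the secret key and debug flag set in /etc/environment
SECRET_KEY = os.environ[""DJANGO_SECRET_KEY""]
DEBUG = os.environ.get(""DJANGO_DEBUG"", ""False"") == ""True""
The `EnvironmentFile=/etc/environment` line in `gunicorn.service` is what makes these values visible to the Gunicorn process.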
## Setting up Nginx
Now we need to create our `.conf` file for Nginx. The file needs to be placed
in `/etc/nginx/sites-available/$sitename` where `$sitename` is the name of
your site.
The final file will look (something) like this:
server {
    listen 80;
    server_name www.yoursite.com yoursite.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/yoursite/yoursite/;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
The `.conf` file above tells Nginx to listen for requests to either
`www.yoursite.com` or `yoursite.com` and then route them to the location
`/home/yoursite/yoursite/`, which is where the files for our Django project
live.
With that in place, all that’s left to do is enable the site by running the
following, replacing `$sitename` with your file name:
ln -s /etc/nginx/sites-available/$sitename /etc/nginx/sites-enabled
You’ll want to run
nginx -t
to make sure there aren’t any errors. If no errors occur you’ll need to
restart Nginx
systemctl restart nginx
The last thing to do is to allow full access to Nginx. You do this by running
ufw allow 'Nginx Full'
1. Probably just [Julia Evans](https://jvns.ca/blog/how-updating-dns-works/) ↩︎
",2021-02-07,getting-your-domain-to-point-to-digital-ocean-your-server,"I use Hover for my domain purchases and management. Why? Because they have a
clean, easy to use, not-slimy interface, and because I listened to enough Tech
Podcasts that I’ve drunk the Kool-Aid.
When I was trying to get my Hover Domain to point to my Digital Ocean server …
",Getting your Domain to point to Digital Ocean Your Server,https://www.ryancheley.com/2021/02/07/getting-your-domain-to-point-to-digital-ocean-your-server/
ryan,productivity,"In [my last post](https://www.ryancheley.com/2022/01/24/auto-tweeting-new-
post/) I mentioned the steps needed in order for me to post. They are:
1. Run `make html` to generate the SQLite database that powers my site's search tool1
2. Run `make vercel` to deploy the SQLite database to vercel
3. [Run `git add ` to add post to be committed to GitHub](https://www.ryancheley.com/2022/01/26/git-add-filename-automation/)
4. Run `git commit -m ` to commit to GitHub
5. [Post to Twitter with a link to my new post](https://www.ryancheley.com/2022/01/24/auto-tweeting-new-post/)
In that post I focused on number 5, posting to Twitter with a link to the post
using GitHub Actions.
In this post I'll be focusing on how I automated step 3, ""Run `git add
` to add post to be committed to GitHub"".
# Automating the `git add ...` part of my workflow
I have my pelican content set up so that the category of a post is determined
by the directory a markdown file is placed in. The structure of my content
folder looks like this:
content
├── musings
├── pages
├── productivity
├── professional\ development
└── technology
If you just run `git status` in a directory it will give you the status of
all of the files in that directory that have been changed, added, or removed.
Something like this:
❯ git status
On branch main
Untracked files:
(use ""git add ..."" to include in what will be committed)
content/productivity/more-writing-automation.md
Makefile
metadata.json
That means that when you run `git add .` all of those files will be added to
git. For my purposes all that I need is the one updated file in the `content`
directory.
The command `find` does a great job of taking a directory and allowing you to
search for what you want in that directory. You can run something like
find content -name '*.md' -print
And it will return essentially what you're looking for. Something like this:
content/pages/404.md
content/pages/curriculum-vitae.md
content/pages/about.md
content/pages/brag.md
content/productivity/adding-the-new-file.md
content/productivity/omnifocus-3.md
content/productivity/making-the-right-choice-or-how-i-learned-to-live-with-limiting-my-own-technical-debt-and-just-be-happy.md
content/productivity/auto-tweeting-new-post.md
content/productivity/my-outlook-review-process.md
content/productivity/rules-and-actions-in-outlook.md
content/productivity/auto-generating-the-commit-message.md
content/productivity/declaring-omnifocus-bankrupty.md
However, because one of my categories has a space in its name (`professional
development`) if you pipe the output of this to `xargs git add` it fails with
the error
fatal: pathspec 'content/professional' did not match any files
In order to get around this, you need to surround each line of the output of
`find` with double quotes (""). You can do this by using `sed`
find content -name '*.md' -print | sed 's/^/""/g' | sed 's/$/""/g'
What this says is, take the output of `find` and pipe it to `sed` and use a
global find and replace to add a `""` to the start of the line (that's what the
`^` does) and then pipe that to `sed` again and use a global find and replace
to add a `""` to the end of the line (that's what the '$' does).
Now, when you run
find content -name '*.md' -print | sed 's/^/""/g' | sed 's/$/""/g'
The output looks like this:
""content/pages/404.md""
""content/pages/curriculum-vitae.md""
""content/pages/about.md""
""content/pages/brag.md""
""content/productivity/adding-the-new-file.md""
""content/productivity/omnifocus-3.md""
""content/productivity/making-the-right-choice-or-how-i-learned-to-live-with-limiting-my-own-technical-debt-and-just-be-happy.md""
""content/productivity/auto-tweeting-new-post.md""
""content/productivity/my-outlook-review-process.md""
""content/productivity/rules-and-actions-in-outlook.md""
""content/productivity/auto-generating-the-commit-message.md""
""content/productivity/declaring-omnifocus-bankrupty.md""
Now, you can pipe your output to `xargs git add` and there is no error!
The final command looks like this:
find content -name '*.md' -print | sed 's/^/""/g' | sed 's/$/""/g' | xargs git add
In the next post, I'll walk through how I generate the commit message to be
used in the automatic tweet!
1. `make vercel` actually runs `make html` so this isn't really a step that I need to do. ↩︎
",2022-01-26,git-add-filename-automation,"In [my last post](https://www.ryancheley.com/2022/01/24/auto-tweeting-new-
post/) I mentioned the steps needed in order for me to post. They are:
1. Run `make html` to generate the SQLite database that powers my site's search tool1
2. Run `make vercel` to deploy the SQLite database to vercel
3. [Run `git add ` to add post to …](https://www.ryancheley.com/2022/01/26/git-add-filename-automation/)
",git add filename automation,https://www.ryancheley.com/2022/01/26/git-add-filename-automation/
ryan,microblog,"Today was one of my better swim times for the 2000 yards that I typically swim
during the week. This was a bit surprising as it was the end of the week and I
had, what I would consider, a pretty intense gym day yesterday. That being
said, there was something about how I was able to seemingly effortlessly glide
through the water.
I also didn't track my laps for which stroke I needed to do, which is a pretty
good sign that I'm just listening to my body and switching up when I want to
and not when I need to because it's time for a new stroke.
It was also a perfect morning for a swim. Slightly cool with no breeze and a
beautiful sunrise hitting the leftover storm clouds with a vibrant pink hue.
When it was all said and done I had a 2'37"" 100yd lap time on average and more
freestyle distance than breaststroke distance, which I hadn't done before.
Here's hoping to more improvement over the next few weeks in my swim time!
",2025-02-14,great-swimming,"Today was one of my better swim times for the 2000 yards that I typically swim
during the week. This was a bit surprising as it was the end of the week and I
had, what I would consider, a pretty intense gym day yesterday. That being
said, there was …
",Great Swimming,https://www.ryancheley.com/2025/02/14/great-swimming/
ryan,musings,"I'm in Orlando for [HIMSS17](http://www.himssconference.org) and and pretty
pumped for my day one session tomorrow which is titled: Business Intelligence
Best Practices: A Strong Foundation for Organizational Success.
Conferences are always a bit overwhelming, but this one is more overwhelming
than most. More than 40,000 people all gathered in one convention center to
discuss Healthcare Tech. Kind of awesome and scary!
I'm looking forward to visiting some booths in the exhibition hall, and
wandering around and stumbling onto some great new things / ideas.
I'm going to write up my impressions of the day's events, hopefully including
notes and links to tweets, because the tweets will be raw and mostly uncensored
impressions of what I'm seeing / hearing.
Here's to HIMSS 2017!
",2017-02-19,himss-2017-day-0,"I'm in Orlando for [HIMSS17](http://www.himssconference.org) and and pretty
pumped for my day one session tomorrow which is titled: Business Intelligence
Best Practices: A Strong Foundation for Organizational Success.
Conferences are always a bit overwhelming, but this one is more overwhelming
than most. More than 40,000 people all gathered in …
",HIMSS 2017 - Day 0,https://www.ryancheley.com/2017/02/19/himss-2017-day-0/
ryan,musings,"I was able to make it to 5 educational sessions today. And the good thing is
that I learned something at each one. I think the highlight of the day for me
was actually my first session titled, _Stacking Predictive Models to Reduce
Readmissions_.
A couple of key things from that presentation were the idea of focusing on a
patient that readmits, not just from a clinical perspective, but from a human
perspective. There was a lot of technology that they used to help the care
coordinators identify who was going to readmit, but the why of the readmission
was always done via human interaction. I think that may be the single most
important thing to remember.
Something else that was mentioned was that the group got their tool out quickly
instead of trying to be perfect. It went through a couple of iterations in
order to get a tool that was usable by all their various clinics.
Some other key takeaways from today:
* We need to focus on Augmented Human Intelligence instead of Artificial Intelligence (from **How Machine Learning and AI Are Disrupting the Current Healthcare System** )
* Don’t treat Cloud Service Providers as **Plug and Play** vendors (from _HIPAA and a Cloud Computing Shared Security Model_ )
* Creation of a committee of ‘No’ to help flesh out ideas before they are implemented (from **Intrapreneurship and the Approach to Innovation From Within** )
* Think about how to operationalize insights from data, and not just explore the data (from **Beyond BI: Building Rapid-Response Advanced Analytics Unit** )
That’s a wrap on day 1 at HIMSS. Day 2 looks to be just as exciting (meet with
some vendors, attend some more educational sessions, go to a sponsored
luncheon).
",2018-03-07,himss-day-1-impressions,"I was able to make it to 5 educational sessions today. And the good thing is
that I learned something at each one. I think the highlight of the day for me
was actually my first session titled, _Stacking Predictive Models to Reduce
Readmissions_.
A couple of key things from …
",HIMSS Day 1 Impressions,https://www.ryancheley.com/2018/03/07/himss-day-1-impressions/
ryan,musings,"Day 2 was a bit more draining than day 1, but that was mostly because I made
my way into the exhibition hall for the first time. That many people and that
much cacophony always leave me a bit ... drained.
On the flip side I went to several good presentations (a couple on Block
Chain).
Today’s sessions were:
* Empowering Data Driven Health
* Blockchain 4 Healthcare: Fit for Purpose
* The Use of Blockchain to Improve Quality Outcomes
One of the more interesting things I heard today was that in Health Care, tech
spending has gone up (over the last 20 years) but so has overall health
spending. Usually we see Tech spending go up and other spending level off (or
go down!).
Something else to consider (that I never had) was that “we need to think about
doing what’s most cost effective for a person in their **lifetime** not just
episodically!”
The Blockchain sessions I went to were enlightening, but I’m still not sure I
understand what it is and how it works (perhaps I’m just trying to make it
more complicated than it is).
That being said, the consensus was that Blockchain is not a panacea for all
that ails us. It is a tool that should be used in conjunction with current
systems, not a replacement of those systems.
Something else of note, there isn’t a single implementation of Block Chain,
there are almost 20 variations of it (although the IEEE is working on
standardizing it). This leads me to believe that it is simply too new and too
‘wild’ to be implemented just yet.
That being said, I think that if/when Microsoft bundles or includes BlockChain
(in some way) into SQL Server, then it might be the time to look at
implementing it in my organization.
In my last session (another one on Blockchain) the idea of using Blockchain to
effect quality measures was discussed. The main point of the speaker was that
Blockchain may allow us to give agency to patients over their health data.
Another interesting point was that Blockchain may be able to allow us to
dynamically observe quality measurement instead of just at point of care. This
could lead to higher quality and lower costs.
Overall, the BlockChain talks were good, and kind of helped point me in the
right direction on what questions to start asking about it.
Well, day 2 is in the books. One more day of educational sessions and
exhibits!
",2018-03-08,himss-day-2,"Day 2 was a bit more draining than day 1, but that was mostly because I made
my way into the exhibition hall for the first time. That many people and that
much cacophony always leave me a bit ... drained.
On the flip side I went to several good presentations …
",HIMSS Day 2,https://www.ryancheley.com/2018/03/08/himss-day-2/
ryan,musings,"One of the issues that any medium- to large-organization can encounter is how
to deal with requests that place a requirement of work from one department to
another. Specifically, requests for something shiny and new (especially
technology).
In the first educational session of the day, **Strategic Portfolio Management:
“Governing the Ungoverned”** I heard [Effie
Economopoulos](https://www.linkedin.com/in/effie-economopoulos-94a23a6/ ""Effie
Economopoulos"") discuss UI Health’s transformation from an organization that
had very little control over their IT projects to one that has transformed
into a highly regulated Project Management Organization.
My key takeaways from this talk were:
* segregation of Projects (with a capital P) from Incidents and Problems
* The IT Roadmap was posted on the intranet for all to see
* Projects that are ‘IT’ related don’t just include the time of resources from IT, but also time and resources from impacted departments throughout the organization
These are some amazing points. My only real question was: if you segregate
Projects from Incidents and Problems, how do you ‘train’ users for Project
submission? How are they to know the difference between the two (sometimes
users aren’t even sure which system is broken when reporting problems in the
first place)? I’m not sure of the answer, but I’m sure it’s just through more
education and tighter controls over submission of requests.
There was a real time poll during the session which asked, ‘What is the most
significant challenge in your organization?’. Fifty percent of attendees that
responded indicated inconsistent priorities as the most significant challenge
(which is what I answered as well). Turns out, we’re not alone.
A lot of the talk focused on the process that UI Health uses which had gone
through 3 iterations in 2 years. It seemed like it would work for a large(ish)
hospital or hospital system, but seemed too bureaucratic for my organization.
Overall, a very good talk and I’m glad I went. I believe I have some real
actionable ideas that I can take away.
In my second educational session of the day, **Improving Patient Health Through
Real-Time ADT Integration**, I heard about a Managed Medical Group from
Minnesota and their journey to get ADT feeds into the Care Management system.
I was hoping to hear something a little more helpful, but while their
situation was similar to the one we have at my organization, it was different
enough that all I really heard was that, although my organization doesn’t have
ADT feeds (yet) we seem to be a bit ahead of them in many other areas of
managed care.
The tips that they offered up (getting user buy-in, working through issues
with all of the various vendors) were things I had already known would need to
be done.
One thing I did hear, that I hope I don’t have to go through, is a ‘Live’
testing process where we get all of the vendors, hospital IT and group IT on
the phone to test the system in a ‘Live’ environment to identify deficiencies.
I also hope that any user manual we have to create isn’t 70 pages like the one
they have (eeek!!!).
I think it will also be important to have metrics regarding efficiencies
before and after any ADT implementations to make sure that we have actually
done something helpful for the organization and the member.
My third talk **Closed Loop Referral Communications** was a bit of a
disappointment. A group from North Carolina reviewed how they closed the loop
on referral management.
I was hoping for some key insights, but it ended up being mostly about stuff
that we had already done (identifying workflow issues, automating where
possible) but they still have a fundamental issue with external provider
referrals (just like us). I guess I was just hoping that someone would have
solved that problem, but if they have, they aren’t sharing the information.
My fourth session **Breaking Down Barriers with Master Data Management and Data
Governance** was really interesting and in the same vein as the first talk of
the day.
Several good points mentioned during the talk:
* Limited access to data leads to duplication of efforts and numerous sources of the ‘truth’
* If you have Tech and People then you get ‘automated chaos’ ... this is why we NEED process
* Difficult to turn data into actionable information
* Significant barriers to accessing information
* use reference data regarding report creation ... instead of asking the report requester questions, you need domain experts to define various categories (Diabetes, sepsis).
* Best Version of the Truth and Golden Record ... need to review this and see how it applies to DOHC/AZPC
The most astounding thing I heard was that each report costs between \$1k and
\$5k to create ... 40% are used 5 times or less! What a source of potential
waste that could perhaps be ‘solved’ by self service. We need to have metrics
that show not how many reports we have created, but instead how many are being
used!
The lessons learned by the speaker:
* Governance: keep information at forefront for all front line areas
* Governance: not a one time effort, it’s on-going
* KPI Standardization: requires resilience
* KPI Standardization: processes that work around the system need to be identified and brought into the fold
The fifth talk of the day was **From Implementation to Optimization: Moving
Beyond Operations**. Much of what was presented resonated with me and was stuff
that
we have dealt with. It was nice to know that we’re not alone! The most
interesting part of the talk were the 2 polls.
The first one asked, “Do you use an objective tool for prioritization of
incoming work?” Most responses were for No, but would be interested (47%);
next response was yes but we bypass (32%). Only about 20% have one, use it, and
it’s effective.
The second poll asked, “Do you collaborate with Clinical Stakeholders?” Most
responses were yes and split 2-1 between Yes and there’s tension to Yes and
we’re equal partners (which is where I think we’re at).
My last talk of the day was **How Analytics Can Create a Culture of Continuous
Improvement**. It was an interesting talk that focused on using Analytics to
drive continuous improvement. Some of the things that really caught my
attention were the ideas of implementing continuous improvement is part of the
job description. Part of that was something that is stated in the New Employee
Orientation, “Do the job you were hired for and make it better.”
Another interesting point was that there is no one Big Bang solution for
Emergency Department throughput (though the idea can be applied to any problem
you’re facing). You need to look at improving each step a little bit along the
way.
But, in order to do this effectively, you need data, team and a process. This
reminded me of the **Breaking Down Barriers with Master Data Management and
Data Governance** talk I was at earlier in the day!
It was a great final day at HIMSS.
I’ve learned a ton at this conference and writing about it (like this) has
really helped to solidify some thoughts, and make me start asking questions.
I’ll need to remember to do this at my next conference.
",2018-03-08,himss-day-3,"One of the issues that any medium- to large-organization can encounter is how
to deal with requests that place a requirement of work from one department to
another. Specifically, requests for something shiny and new (especially
technology).
In the first educational session of the day, **Strategic Portfolio Management:
“Governing the …**
",HIMSS Day 3,https://www.ryancheley.com/2018/03/08/himss-day-3/
ryan,musings,"I've gone through all of my notes, reviewed all of the presentations and am
feeling really good about my experience at HIMSS.
Takeaways:
1. We need to get ADT enabled for the local hospitals
2. We need to have a governance system set up for a variety of things, including data, reporting, and IT based projects
Below are the educational sessions (in no particular order) I attended and my
impressions. Mostly a collection of _interesting_ facts (I've left the Calls
to Action for my to do list).
**Choosing the Right IT Projects to Deliver Strategic Value** presented by
[Tom Selva](https://www.linkedin.com/in/thomas-selva-49207351) and [Seth
Katz](https://www.linkedin.com/in/sethjeremykatz) they really hit home the
idea that there is a relationship between culture and governance. The culture
of the organization has to be ready to accept the accountability that will
come with governance. They also indicated that process is the most important
part of governance. Without process you **CANNOT** have governance.
In addition to great advice, they had great implementation strategies
including the idea of requiring all IT projects to have an elevator pitch and
a more formal 10 minute presentation on why the project should be done and in
what way it aligned with the strategy of the organization.
**Semantic data analysis for interoperability** presented by [Richard E.
Biehl, Ph.D.](http://iems.ucf.edu/mshse) showed me that there was an aspect of
data that I hadn't ever had to think about. What to do when multiple systems
are brought together and define the same word or concept in different ways.
Specifically,, ""Semantic challenge is the idea of a shared meaning or the data
that is shared"". The example on relating the concept of a migraine from ICD to
SNOMED and how they can result in mutually exclusive definitions of the same
'idea' was something I hadn't ever really considered before.
**Next Generation IT Governance: Fully-Integrated and Operationally-Led**
presented by [Ryan Bosch, MD, MBAEHS](https://www.linkedin.com/in/ryan-bosch-
md-46b921) and [Fran Turisco, MBA](https://www.linkedin.com/in/fran-
turisco-015096a) hit home the idea of **Begin with the End in mind**. If you
know where you're going it's much easier to know _how_ to get there. This is
something I've always instinctively felt, however, distilling it to this
short, easy to remember statement was really powerful for me.
[Link to HIMSS
Presentation](http://www.himssconference.org/sites/himssconference/files/pdf/206.pdf)
**Developing a “Need-Based” Population Management System** presented by Rick
Lang and [Tim Hediger](https://www.linkedin.com/in/tim-hediger-a1765) hammered
home the idea that ""Collaboration and Partnering are KEY to success"". Again,
something that I _know_ but it's always nice to hear it out loud.
[Link to HIMSS
Presentation](http://www.himssconference.org/sites/himssconference/files/pdf/124_0.pdf)
**Machine Intelligence for Reducing Clinical Variation** presented by [Todd
Stewart, MD](https://www.linkedin.com/in/rowland-todd-stewart-md-7a85b6b) and
[F.X. Campion, MD, FACP](https://www.linkedin.com/in/francis-campion-b3a8047)
was one of the more technical sessions I attended. They spoke about how
Artificial Intelligence and Machine Learning don't replace normal analysis,
but instead allow us to focus on what hypothesis we should test in the first
place. They also introduced the idea (to me anyway) that data has _shape_ and
that _shape_ can be analyzed to lead to insight. They also spoke about
'Topological Data Analysis' which is something I want to learn more about.
[Link to HIMSS
Presentation](http://www.himssconference.org/sites/himssconference/files/pdf/110.pdf)
**Driving Patient Engagement through mobile care management** presented by
[Susan Beaton](https://www.linkedin.com/in/susan-beaton-7848071b) spoke about
using _Health Coaches_ to help patients learn to implement parts of the care
plan. They also spoke about how ""Mobile engagement can lead to increased
feeling of control for members"" These are aspects that I'd like to see my
organization look to implement in the coming months / years.
[Link to HIMSS
Presentation](http://www.himssconference.org/sites/himssconference/files/pdf/97_0.pdf)
**Expanding Real time notifications for care transitions** presented by
[Elaine Fontaine](https://www.linkedin.com/in/elaine-fontaine-3b68144) spoke
about using demographic data to determine the best discharge plan for the
patient. In one of the presentations I saw (Connecticut Hospitals Drive Policy
with Geospatial Analysis presented by Pat Charmel) the presenter had indicated
that as much as 60% of healthcare costs are determined by demographics. If we
can keep this in mind we can help control healthcare costs much more
effectively, but it led me to ask:
* how much do we know
* how much can we know
* what aspects of privacy do we need to think about before embarking on such a path?
[Link to HIMSS
Presentation](http://www.himssconference.org/sites/himssconference/files/pdf/82_0.pdf)
**Your Turn: Data Quality and Integrity** was more of an interactive
session. When asked the question ""What would a National Patient Identifier be
useful for?"" most attendees in the audience felt that it would help with
information sharing.
**Predictive Analytics: A Foundation for Care Management** presented by
[Jessica Taylor, RN](https://www.linkedin.com/in/jessica-taylor-56039864) and
Amber Sloat, RN, I saw that while California has been thinking about and
preparing for value based care for some time, the rest of the country is just
coming around to the idea. The hospital that these Nurses work for is doing
some very innovative things, but they're things that we've been doing for
years. The one thing they did seem to have that we don't is an active HIE that
helps to keep track of patients in near real time, which I would love to have!
One of the benefits of a smaller state perhaps (they were from Maine)?
[Link to HIMSS
Presentation](http://www.himssconference.org/sites/himssconference/files/pdf/44.pdf)
**A model of data maturity to support predictive analytics** presented by
[Daniel O’Malley, MS](https://www.linkedin.com/in/daniel-o-malley-49995b8) was
full of lots of charts and diagrams on what the University of Virginia was
doing, but it was short on how they got there. I would have liked to have seen
more information on roadblocks that they encountered during each of the stages
of the maturity. That being said, because the presentation has the charts and
diagrams, I feel like I'll be able to get something out of the talk that will
help back at work.
[Link to HIMSS
Presentation](http://www.himssconference.org/sites/himssconference/files/pdf/19.pdf)
**Emerging Impacts on Artificial Intelligence on Healthcare IT** presented by
[James Golden, Ph.D.](https://www.linkedin.com/in/jigolden) and Christopher
Ross, MBA. They had a statistic that 30% of all data in the world is
healthcare data! That was simply amazing to me. They also had data showing
that medical knowledge doubles every THREE years. This means that between the
time you started medical school and the time you were a full-fledged doctor
the amount of medical knowledge could have increased 4 to 8 fold! How can
anyone keep up with that kind of knowledge growth? The simple answer is that
they can't and that's why AI and ML are so important for medicine. But equally
important is how the AI/ML are trained.
[Link to HIMSS
Presentation](http://www.himssconference.org/sites/himssconference/files/pdf/300_0.pdf)
",2017-02-25,"himss-recap, Conferences","I've gone through all of my notes, reviewed all of the presentations and am
feeling really good about my experience at HIMSS.
Takeaways:
1. We need to get ADT enabled for the local hospitals
2. We need to have a governance system set up for a variety of things, including data, reporting …
",HIMSS Recap,"https://www.ryancheley.com/2017/02/25/himss-recap, Conferences/"
ryan,musings,"I had meant to do a write up of each day of my HIMSS experience, but time got
away from me, as did the time zone change, and here I am at the end of my HIMSS
experience with only my day 0 notes down on _paper_.
Day 1 started with a rousing Keynote by Ginni Rometty, the CEO of IBM. The
things that struck me most about her keynote were her sense of optimism about
the future, sprinkled with some caution about AI, Machine Learning and Big
Data. She reminded us that the computers that we are using for our analysis
are tools to help, not replace, people and that it is incumbent upon us, the
leaders of HIT, to keep in the front of our minds how these BIG Data AI/ML
algorithms were trained. As the old saying goes, ""Garbage In, Garbage Out""
I also was able to record a bit of [her keynote
speech](https://www.dropbox.com/s/ou0kgdfnwyrxdsa/Ginni%20Rometty.m4a?dl=1)
just in case I need to find and listen to it later.
I tweeted a couple of times during the keynote (and even got some likes and
retweets ... not something I'm used to getting)
> `Transparency in the Era of Cognition with the help of @ibmwatson #himss17`
>
> `Artificial intelligence is out of its winter ... I sure hope so, but time
> will tell #himss17`
>
> `Integration in workflow is the key to adoption #himss17`
>
> `Don't let others define you. Great words from @GinniRometty #himss17`
>
> `Growth and comfort never coexist. Another great gem from @GinniRometty
> #himss17`
I spent almost all of my time on day 1 in educational sessions. One thing
that I noticed from my first class was just how _FULL_ it was 15 minutes
before the session even started!
> `The Emerging Impacts of AI on HIT was full 15 minutes before the session
> started! Something tells me lots of ppl interested in AI #HIMSS17`
Sometimes the session titles were a bit misleading, but eventually most of them
would come around. A class with a title of _Connecticut Hospitals Drive Policy
with Geospatial Analysis_ was more about the Connecticut Hospitals and less
about the Geospatial Analysis, but in the end I saw what I was hoping to see,
which was people using Geospatial Analysis to help identify, and perhaps risk
stratify, patients to give the best care possible.
My tweet when the class was over:
> `Great talk on #geospatial analysis. So many ideas floating through my head
> now on potential actions and analysis #HIMSS17`
I ended my HIMSS 2017 experience on a high note with a great session titled
_Choosing the Right IT Projects To Deliver Strategic Value_. I'm still
processing everything that came out of that session, but it left me feeling
very positive about the future. It was nice to have the same, or at least very
similar, feeling of optimism at the end of HIMSS as I had at the beginning
after Mrs. Rometty's Keynote.
I'll be writing up my notes and linking to the presentations later this week
(maybe whilst I'm flying back home to California tomorrow).
This is a conference I am overwhelmed by, but am glad I am coming to.
While it's fresh in my mind, strategies for next year:
* Pick 1 - 3 strategic challenges you want to solve. Then identify 10 - 20 vendors that can help solve that problem. Talk to them, schedule appointments with them. Get more information than you know what to do with
* Work on being a presenter. It will help check off that 'Speak in front of large groups of people' item on your _Bucket List_
",2017-02-23,himss-review,"I had meant to do a write up of each day of my HIMSS experience, but time got
away from me, as did the time zone change, and here I am at the end of HIMSS
experience with only my day 0 notes down on _paper_.
Day 1 started with …
",HIMSS review,https://www.ryancheley.com/2017/02/23/himss-review/
ryan,technology,"As I've been writing up my posts for the last couple of days I've been using
the amazing [macOS](https://en.wikipedia.org/wiki/Macintosh_operating_systems)
[Text Editor](https://en.wikipedia.org/wiki/Text_editor)
[BBEdit](http://www.barebones.com/products/bbedit/index.html). One of the
things that has been tripping me up though is my 'Windows' tendencies on the
keyboard. Specifically, my muscle memory of the use and behavior of the
`Home`, `End`, `PgUp` and `PgDn` keys. The default behavior for these keys in
BBEdit are not what I needed (nor wanted). I lived with it for a couple of
days figuring I'd get used to it and that would be that.
While driving home from work today I was listening to [ATP Episode
196](https://atp.fm/episodes/196) and their Post-Show discussion of the recent
departure of [Sal Soghoian](https://en.wikipedia.org/wiki/Sal_Soghoian) who
was the Project Manager for the macOS automation. I'm not sure why, but
suddenly it clicked with me that I could probably change the behavior of the
keys through the Preferences for the Keyboard (either system wide, or just in
the Application).
When I got home I fired up
[BBEdit](http://www.barebones.com/products/bbedit/index.html) and jumped into
the preferences and saw this:

I made a couple of changes, and now the keys that I use to navigate through
the text editor are now how I want them to be:

Nothing too fancy, or anything, but goodness, does it feel right to have the
keys work the way I need them to.
",2016-11-22,home-end-pgup-pgdn-bbedit-preferences,"As I've been writing up my posts for the last couple of days I've been using
the amazing [macOS](https://en.wikipedia.org/wiki/Macintosh_operating_systems)
[Text Editor](https://en.wikipedia.org/wiki/Text_editor)
[BBEdit](http://www.barebones.com/products/bbedit/index.html). One of the
things that has been tripping me up though is my 'Windows' tendencies on the
keyboard. Specifically, my muscle memory of the use and behavior of …
","Home, End, PgUp, PgDn ... BBEdit Preferences",https://www.ryancheley.com/2016/11/22/home-end-pgup-pgdn-bbedit-preferences/
ryan,musings,"I have been wanting to put shelves up in my office above my desk for some
time. The problem has been that the ones that are sold at Lowe’s or Home Depot
are not really what I wanted (too short) and I’m not a super handy guy with
building stuff (that’s more my dad and brother) so I’ve just been putting it
off. For an embarrassingly long time.
A couple of weekends ago my dad volunteered to help me out in putting
up some shelves.
On Saturday at 8:30 we started. All in all the process went really, really
well. Only one extra trip to the hardware store (it’s usually about 3) and the
shelves were nice and level.
Since I wanted the shelves above my desk we needed to move it, and all of the
electronics that were on it, and plugged into the outlet behind it. This
included a UPS / Battery backup that all of my electronics were plugged into.
We moved everything away from the wall, and then I moved it back. No. Big.
Deal.
Now, the timing may have just been coincidental, but the next morning I needed
to do some work for my job-y job from home. I took my laptop into my office
(with the brand new shelves) and plugged it into the UPS.
I noticed the lights flicker and discovered that the WiFi router (my trusty
AirPort Extreme) seemed to have reset itself.
No big deal. I just rebooted and we were all good.
Later that day I plugged in my iMac and then stuff got real. The lights went
out. I figured that the breaker tripped, but the sprinklers next to the
breaker were on so I waded out through to the box and turned the breaker back
on. Or so I thought. I came back in and the lights were still off.
At this point I freaked out because, well, that’s kind of what I do. I went
back out and turned the breaker off and then back on. Lights are back.
OK, let's try this again. I plug the iMac back in and ... crap. Lights are off
again.
Back to the breaker (at this point the sprinklers are off) so off and on the
breaker went.
OK, one last time and ... mother f!
Somehow I was able to go from being able to have my UPS plugged in and
everything being fine, to not.
OK. Swap out the UPS and put back the Surge Protectors. Everything powers on
and we’re good.
Except we’re not. The light on my AirPort Extreme is suddenly not a solid
green, but instead a flashing amber. I consult the
[internet](https://support.apple.com/en-us/HT202211#amber ""About the status
light on AirPort base stations"") and get a very unhelpful message
> These are some typical reasons for the status light to flash amber:
>
> The base station hasn't been set up, or it was reset and needs to be set up
> again. Use AirPort Utility to set up your base station.
>
> A firmware update is available for the base station.
>
> The base station is set up to use Back to My Mac, but Back to My Mac isn't
> working or the password is incorrect. If you've upgraded to macOS Mojave,
> you should remove the base station from your Back to My Mac network, because
> Mojave doesn't support Back to My Mac.
>
> The base station can't connect to the Internet, such as when Internet
> service is down at your location, the base station can't acquire an IP
> address from your primary router, or the WAN Ethernet connection to your
> router isn't working.
>
> The base station is set up to wirelessly extend the range of your network,
> but is too far away from the primary Wi-Fi base station.
>
> If your base station is an AirPort Time Capsule, its internal hard disk is
> experiencing an issue that requires repair.
And suddenly my entire WiFi is down. And I am sad.
I tried a ton of things to get the AirPort Extreme back, but nothing was
working. I finally threw in the towel and decided to use the WiFi access
point from my Fios router.
This means that I have to update the WiFi on:
* 3 iPhones
* 2 iPads
* 1 MacBook
* 2 MacBookPros
* 1 iMac
* 2 Wemo Switches
* 2 Raspberry Pi
* 3 Apple TVs (2 4th Gen and 1 3rd Gen)
* 1 WiFi connected Scale
* 1 Ring Doorbell
* 1 Ring Chime (connected to Ring Doorbell)
It also means that I need to plug my Netgear switch into my Fios router
instead of the AirPort Extreme. No big deal, right? Except that it was, because
I forgot that the port the Cat5 cable is plugged into on a router is
important.
I spent an embarrassingly long time trying to figure out why my Sonos and Hue
Lights weren’t on my network.
Emily kept telling me to take a break and relax and that was, in that moment,
the last thing I wanted to do.
I was able to get all of the iOS and MacOS devices connected back to the
internet (via WiFi) and decided that I needed to forget the network and watch
game 5 of the World Series.
By the end of the 7th we had the game off and were catching up on CW Comic
Book shows.
It was a rough day. But I learned a couple of things:
1. LAN Port 1 on the Fios Router is the right port
2. Sometimes, you just need to take a step back and think instead of just react
3. I have a crap ton of WiFi devices
I'm still working on trying to get the AirPort Extreme back to working order so
that I don't need to get a new WiFi router (have I mentioned how awful the Fios
one is?).
",2018-11-05,hosing-my-wifi-set-up,"I have been wanting to put shelves up in my office above my desk for some
time. The problem has been that the ones that are sold at Lowe’s or Home Depot
are not really what I wanted (too short) and I’m not a super handy guy with …
",Hosing my WiFi set up,https://www.ryancheley.com/2018/11/05/hosing-my-wifi-set-up/
ryan,technology,"I created a Django site to troll my cousin Barry who is a big [San Diego
Padres](https://www.mlb.com/padres ""San Diego Padres"") fan. Their Shortstop is
a guy called [Fernando Tatis Jr.](https://www.baseball-
reference.com/players/t/tatisfe02.shtml ""Fernando “Error Maker” Tatis Jr."")
and he’s really good. Like **really** good. He’s also young, and arrogant, and
is everything an old dude like me doesn’t like about the ‘new generation’ of
ball players that are changing the way the game is played.
In all honesty though, it’s fun to watch him play (anyone but the Dodgers).
The thing about him though, is that while he’s really good at the plate, he’s
less good at playing defense. He currently leads the league in errors. Not
just for all shortstops, but for ALL players!
Anyway, back to the point. I made this Django site called [Does Tatis Jr Have an
Error Today?](https://www.doestatisjrhaveanerrortoday.com ""Not Yet""). It is a
simple site that only does one thing ... tells you if Tatis Jr has made an
error today. If he hasn’t, then it says `No`, and if he has, then it says
`Yes`.
It’s a dumb site that doesn’t do anything else. At all.
But, what it did do was lead me down a path to answer the question, “How does
my site connect to the internet anyway?”
Seems like a simple enough question to answer, and it is, but it wasn’t really
what I thought when I started.
## How it works
I use a MacBook Pro to work on the code. I then deploy it to a Digital Ocean
server using GitHub Actions. But, as they say, a picture is worth a
thousand words, so here's a chart of the workflow:

This shows the development cycle, but that doesn’t answer the question, how
does the site connect to the internet!
How is it that when I go to the site, I see anything? I thought I understood
it, and when I tried to actually draw it out, turns out I didn't!
After a bit of Googling, I found [this](https://serverfault.com/a/331263 ""How
does Gunicorn interact with NgInx?"") and it helped me to create this:

My site runs on an Ubuntu 18.04 server using Nginx as a proxy server. Nginx
determines if the request is for a static asset (a css file for example) or
a dynamic one (something served up by the Django App, like answering if
Tatis Jr. has an error today).
If the request is static, then Nginx just gets the static data and serves
it. If it’s dynamic, it hands off the request to Gunicorn, which then
interacts with the Django App.
So, what actually handles the HTTP request? From the [serverfault.com answer
above](https://serverfault.com/a/331263):
> [T]he simple answer is Gunicorn. The complete answer is both Nginx and
> Gunicorn handle the request. Basically, Nginx will receive the request and
> if it's a dynamic request (generally based on URL patterns) then it will
> give that request to Gunicorn, which will process it, and then return a
> response to Nginx which then forwards the response back to the original
> client.
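To make that handoff a little more concrete: Gunicorn doesn’t actually know
anything about Django. It just calls a WSGI application object, which is
what Django exposes through its `wsgi.py` module. Here's a stripped-down
sketch of a WSGI app (this toy stands in for the real Django app; the names
are mine, not from the site’s code):
```python
# A minimal WSGI application -- the same callable interface that
# Gunicorn invokes on the Django app for every dynamic request.
def application(environ, start_response):
    # environ is a dict of request data (path, method, headers) that
    # Gunicorn builds from the raw HTTP request Nginx forwarded to it
    status = '200 OK'
    body = b'No'  # the real site asks Django if Tatis Jr has an error today
    headers = [
        ('Content-Type', 'text/plain'),
        ('Content-Length', str(len(body))),
    ]
    start_response(status, headers)
    return [body]
```
When Gunicorn ""processes"" a request, this is all that means: call the
application object, take the body it returns, and stream it back to Nginx,
which forwards it on to the original client.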
In my head, I thought that Nginx was ONLY there to handle the static
requests (and it is), but I wasn’t clear on how dynamic requests were
handled. Drawing this out really made me stop and ask, “Wait, how DOES that
actually work?”
Now I know, and hopefully you do too!
## Notes:
These diagrams are generated using the amazing library
[Diagrams](https://github.com/mingrammer/diagrams ""Diagrams""). The code used
to generate them is
[here](https://github.com/ryancheley/tatis/blob/main/generate_diagram.py).
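If you haven’t used Diagrams before, the gist is that you describe nodes and
edges in plain Python and it renders an image for you. As a rough sketch,
a script for the request-flow diagram above might look something like this
(the icon classes here are my guesses, not necessarily the ones the real
script uses):
```python
from diagrams import Diagram
from diagrams.onprem.client import Client
from diagrams.onprem.compute import Server
from diagrams.onprem.network import Nginx
from diagrams.programming.framework import Django

# Renders an image of the request flow described above.
with Diagram('How the site connects to the internet', show=False):
    browser = Client('Browser')
    proxy = Nginx('Nginx')
    app_server = Server('Gunicorn')  # stand-in icon for Gunicorn
    app = Django('Django App')

    # >> draws a directed edge from one node to the next
    browser >> proxy >> app_server >> app
```
Running the script drops the rendered image in the working directory.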
",2021-05-31,how-does-my-django-site-connect-to-the-internet-anyway,"I created a Django site to troll my cousin Barry who is a big [San Diego
Padres](https://www.mlb.com/padres ""San Diego Padres"") fan. Their Shortstop is
a guy called [Fernando Tatis Jr.](https://www.baseball-
reference.com/players/t/tatisfe02.shtml ""Fernando “Error Maker” Tatis Jr."")
and he’s really good. Like **really** good. He’s also young, and arrogant, and
is everything an old dude like me doesn …
",How does my Django site connect to the internet anyway?,https://www.ryancheley.com/2021/05/31/how-does-my-django-site-connect-to-the-internet-anyway/
ryan,musings,"As technical folks working with non-technical folks sometimes the asks that
come through are unclear. To get clarity on these we want to ask clarifying
questions, but it can be challenging to not
sound like a jerk when we ask. This can happen even IF we do our best to come
across in a positive way.
When trying to ask for more details on a project or request I find it's
usually best to get to the source of the issue. I like to ask, ""What problem
are we trying to solve here?"" or something similar.
This helps to put you and the requester on 'the same team' trying to 'solve
the problem' and not in a potentially negative 'why are you asking me this
stupid question' sort of light.
I can't say that I have 'one weird trick' that will always make this not a
problem, but recently at my $dayJob I had an experience that might be helpful
in seeing how to navigate this particular process.
## The problem
I received an email that went something like this
> Please see below. It seems that delivery of paper reports via courier
> could be automated by sending them to a portal. What are your thoughts?
My initial thought was, ""Yes, if we could automate these reports and send them
electronically to a portal that would be more efficient.""
However, there are some deeper questions here that need to be asked ... like:
1. Why are we sending these reports in the first place?
Just asking this question though puts us into a potential state of conflict,
i.e. it's similar to sounding like you're asking, ""why would you do this
stupid thing"". In order to avoid this I reframed the question into 3 deeper
questions that tried to frame 'the problem' and put me and the requester 'on
the same team' to 'solve the problem'
1. What are the reports?
2. What are the recipients of the reports supposed to do with them?
3. Do the recipients of the reports find them helpful, or do they just put them in the shred bin?
My first response to the sender was
> Ideally any reports that are being delivered on printed paper by courier
> would be better served to be delivered via some electronic means. Can you
> tell me, what are these reports and who are the intended recipients?
I wanted to explicitly ask who the intended recipients were (I work in
Healthcare and these reports are 'for the doctors' but they might actually be
getting delivered to an office manager, a front desk person, or anyone other
than the doctor).
The sender responded back
> They are reports that show a key metric for outstanding work left to do
> for a specific population of their membership. Each doctor (or their
> office) is free to do, or not do, anything with the information in these
> reports.
Next I asked if the recipients had been surveyed on the usefulness of the
reports and that's when the sender indicated:
> Actually, no. It's something that we need to do so that we can potentially
> consolidate reports and/or eliminate unhelpful reports.
## The Solution
At the end we decided that before any work was done to 'automate' the delivery
of these reports, that we really needed to address the contents of the reports
and determine which parts of them were helpful, and what parts weren't. Once
we have a single report, or potentially a suite of reports, the automation and
delivery work could actually start.
By working through and trying to determine the actual problem that needed to
be solved by asking questions to help both me and the requester better
understand what the real ask was, we saved a ton of development time and have
a better path forward for making the information we have more relevant and
actionable by the doctors' offices.
Will this work in every situation? Maybe not, but I believe it's a good
starting point when trying to solve 'real world' problems in a work setting.
Tech folks have a (sometimes deserved) bad rap, but we can shed this negative
impression by showing the people that request solutions from us that we're
both working towards the same goal of solving the problem.
",2024-08-22,how-to-ask-why-without-sounding-like-a-jerk,"As technical folks working with non-technical folks sometimes the asks that
come through are unclear. To get clarity on these we want to ask clarifying
questions, but it can be challenging to not
sound like a jerk when we ask. This can happen …
",How to ask why without sounding like a jerk,https://www.ryancheley.com/2024/08/22/how-to-ask-why-without-sounding-like-a-jerk/
ryan,microblog,"Life is full of decisions. Some of them are easy, like what will I have for
breakfast on a weekday? The answer is peanut butter toast with blueberry
preserves, obviously!
Some of them less so. When faced with a decision that is hard there are
lots of strategies to help in making it. You can make pros and cons lists.
You can do worst case scenarios. You can do any one of a number of things to
help get you to the decision, which is helpful. But what these tricks can
never do is tell you if you're making the right decision.
When faced with a difficult decision, the reason that it's difficult is
that the choice you make isn't clearly going to be the right one or the
wrong one.
It's just going to be a decision you made. Only time will help you find out if
it was the right one or not.
I'm really focusing on making hard choices recently. Lots of things in life
right now are putting choices in front of me. The choices are hard to make,
and the goodness of the decisions may not be known for weeks, or months. All I
know is that a decision needs to be made, because not making a decision is a
decision in and of itself.
Here's to the tough choices and the decisions that come along with them.
",2025-02-10,how-to-make-a-hard-decision,"Life is full of decisions. Some of them are easy, like what will I have for
breakfast on a weekday? The answer is peanut butter toast with blueberry
preserves, obviously!
Some of them less so. When faced with a decision that is hard there are lots of
strategies to help …
",How to Make a Hard Decision,https://www.ryancheley.com/2025/02/10/how-to-make-a-hard-decision/
ryan,musings,"I’ve been thinking a bit about how to decide which team to root for. Mostly I
just want to stay logically consistent with the way I choose to root for a
team (when the Dodgers aren't playing obviously).
After much thought (and sketches on my iPad) I’ve come up with this table to
help me determine who to root for:
* * *
| Opp 1 / Opp 2 | NL West | NL Central | NL East | AL West | AL Central | AL East |
| --- | --- | --- | --- | --- | --- | --- |
| **NL West** | Root for team that helps the Dodgers | NL Central Team | NL East Team | NL West Team, unless it hurts the Dodgers | NL West Team, unless it hurts the Dodgers | NL West Team, unless it hurts the Dodgers |
| **NL Central** | NL Central Team | Root for underdog | NL Central Team | NL Central Team | NL Central Team | NL Central Team |
| **NL East** | NL East Team | NL Central Team | Root for underdog | NL East Team | NL East Team | NL East Team |
| **AL West** | NL West Team, unless it hurts the Dodgers | NL Central Team | NL East Team | The Angels over the A's over the Mariners over the Rangers over the Astros | AL West Team | AL West Team |
| **AL Central** | NL West Team, unless it hurts the Dodgers | NL Central Team | NL East Team | AL West Team | Root for underdog | AL Central Team |
| **AL East** | NL West Team, unless it hurts the Dodgers | NL Central Team | NL East Team | AL West Team | AL Central Team | Root for underdog (unless it's the Yankees) |
* * *
The basic rule is: root for the team that helps the Dodgers' playoff
chances, then National League over American League, and finally West over
Central over East (from a division perspective).
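If I ever felt like automating this (the sketches were more fun), the whole
table boils down to a lookup keyed by the two opponents' divisions. A toy
sketch, with only a few of the 21 unique matchups filled in:
```python
# Toy encoding of the table above, keyed by the divisions of the
# two opponents. Only a handful of entries are shown here.
ROOT_FOR = {
    ('NL West', 'NL West'): 'Root for team that helps the Dodgers',
    ('NL West', 'NL Central'): 'NL Central Team',
    ('NL West', 'AL East'): 'NL West Team, unless it hurts the Dodgers',
    ('AL East', 'AL East'): 'Root for underdog (unless it\'s the Yankees)',
}

def who_to_root_for(division_1: str, division_2: str) -> str:
    key = (division_1, division_2)
    if key in ROOT_FOR:
        return ROOT_FOR[key]
    # the table is symmetric, so fall back to the reversed matchup
    return ROOT_FOR[(division_2, division_1)]
```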
There were a couple of cool sketches I made, on real paper and my iPad.
Turns out, sometimes you really need to think about a thing before you write
it down and commit to it.
Of course, this is all subject to change depending on the impact any game
would have on the Dodgers.
",2018-04-02,how-to-pick-a-team-to-root-for-when-the-dodgers-arent-playing,"I’ve been thinking a bit about how to decide which team to root for. Mostly I
just want to stay logically consistent with the way I choose to root for a
team (when the Dodgers aren't playing obviously).
After much thought (and sketches on my iPad) I’ve come …
",How to pick a team to root for (when the Dodgers aren’t playing),https://www.ryancheley.com/2018/04/02/how-to-pick-a-team-to-root-for-when-the-dodgers-arent-playing/
ryan,professional development,"Hi, welcome to the team. I'm so glad you are here at \$COMPANY.
It's going to take a solid 90 days to figure this place out. I understand the
importance of first impressions, and I know you want to get a check in the win
column, but this is a complex place full of equally complex humans. Take your
time, meet everyone, write things down, and ask all the questions - especially
about all those baffling acronyms … healthcare is full of them.
One of the working relationships we need to define is ours. The following is a
user guide for me and how I work. It captures what you can expect out of the
average week, how I like to work, my north star principles, and some of my,
uh, idiosyncrasies. My intent is to accelerate our working relationship with
this document.
## Our Average Week
We'll have a 1:1 every week for about 30 minutes. I try to never cancel
this meeting, so it might get moved around a bit instead. I would like to
apologize for this in advance.
If you are curious about the 1:1s I have with my manager I’m more than happy
to tell you about their frequency and duration. I meet with my boss at least
once a week for anywhere from 30 - 90 minutes. It just depends on the week.
The purpose of our meeting is to discuss topics of substance, not updates
(there are other platforms for that). Sometimes they can morph into update
type meetings. I’ll do my best to keep that from happening, and I ask that you
do the same. I have a running list of items that I will want to discuss with
you and I encourage you to have the same.
We have scrum every day. The purpose of the scrum is to tell the **team**
three things:
1. What I did yesterday
2. What I’m doing today
3. What, if any, roadblocks I have
The scrum master will make note of the roadblocks and work to remove them as
quickly as possible. Sometimes this is fast, sometimes it’s not.
If I am traveling or will be out of the office on PTO (yes, I take PTO and you
should too once you can), I will give you notice of said travel in advance.
Depending on the type of travel I may need to cancel our meeting.
Sometimes I work on the weekends. Sometimes I work late. Unless we have a big
project that you are working on and it needs to get done I don’t ask anyone
else to work late or on the weekends. I want you to have a life outside of
work.
## North Star Principles
**Humans first.** I believe that happy, informed, and productive humans build
fantastic products. I try to optimize for the humans. Other leaders will
maximize the business, the technology, or any other number of important
facets. Ideological diversity is key to an effective team. All perspectives
are relevant, and we need all these leaders, but my bias is towards building
productive humans.
**Leadership comes from everywhere.** My wife likes to remind me that I hated
meetings for the first ten years of my professional career. She's right. I've
wasted a lot of time in poorly run meetings by bad managers. I remain
skeptical of managers even as a manager. While I believe managers are an
essential part of a scaling organization, I don't believe they have a monopoly
on leadership, and I work hard to build other constructs and opportunities in
our teams for non-managers to lead.
**It is important to me that humans are treated fairly.** I believe that most
humans are trying to do the right thing, but unconscious bias leads them
astray. I work hard to understand and address my biases because I understand
their ability to create inequity. I am not perfect, but I try to be better
today than I was yesterday. Sometimes I succeed. Sometimes I don’t.
**I heavily bias towards action.** Long meetings where we are endlessly
debating potential directions are often valuable, but I believe starting is
the best way to begin learning and make progress. This is not always the
correct strategy. This strategy annoys those who like to debate.
**I believe in the compounding awesomeness of continually fixing small
things.** I believe quality assurance is everyone's responsibility and there
are bugs to be fixed everywhere… all the time.
**I start with an assumption of positive intent for all involved.** This has
worked out well for me over my career.
## Feedback Protocol
I firmly believe that feedback is at the core of building trust and respect in
a team.
At \$COMPANY, there is a formal feedback cycle which occurs once a year per
employee.
During that formal feedback cycle (also called the Annual Review) we will
discuss the previous year. There’s a form (\$COMPANY **loves** forms). I’ll
fill it out and we’ll discuss it.
This means that at any one time I could be finishing up 5 reviews or 1.
Notice I say finishing up. I try to treat the reviews I write as living
documents so I can capture everything from the year, and not just everything
from the last month.
If during the Annual Review you are surprised (positively or negatively) by
anything, I have not done my job. Please let me know. Feedback is the only way
we know we are doing something well, or not well.
I won’t assume you know what I’m thinking, and I ask that you don’t assume I
know what you’re thinking.
Disagreement is feedback and the sooner we learn how to efficiently disagree
with each other, the sooner we'll trust and respect each other more. Ideas
don't get better with agreement.
## Meeting Protocol
I go to a lot of meetings. In the morning scrum many times I will indicate
that today I have several meetings. I don’t enumerate all of them because I
don’t think everyone wants to know specifically which meetings I’m going to.
If I think it’s important for the team to know, I will say, I have meeting X
today. If I don’t indicate what meeting I have and you want to know, ask. If
it’s not private / confidential I will tell you.
My definition of a meeting includes an agenda and/or intended purpose, the
appropriate amount of productive attendees, and a responsible party running
the meeting to a schedule. If I am attending a meeting, I'd prefer starting on
time. If I am running a meeting, I will start that meeting on time.
If a meeting completes its intended purpose before it's scheduled to end,
let's give the time back to everyone. If it's clear the intended goal won't be
achieved in the allotted time, let's stop the meeting before time is up and
determine how to finish the meeting later.
## Nuance and Errata
**I am an introvert** and that means that prolonged exposure to humans is
exhausting for me. Weird, huh? I tend to be most active when I’m not running
the meeting and there are fewer people. If I’m not running the meeting and
there are many people I am strangely quiet. Do not confuse my quiet with lack
of engagement.
**When I ask you to do something that feels poorly defined** you should ask me
for both clarification and a call on importance. I might still be
brainstorming. These questions can save everyone a lot of time.
**I tend to be very reserved** but this is not a sign that I am uninterested,
it is just who I am. Every once in a while that reserved facade is cracked and
I display emotions. That’s when you can tell I’m really excited about a thing
(either good or bad).
**During meetings in my office** I will put my phone on DND and log out of my
computer if we won’t be using it. If we will be using my computer I close
Outlook and only have the applications open that need to be open. During
meetings I will take notes on my phone. I have a series of actions programmed
on my iPhone to help keep me on top of things that I need to do. Rest assured,
I’m not texting anyone, or checking the next available movie time. When I am
done typing a note, I will put the phone down.
**During meetings over Zoom, Slack, etc.** I will put all communication apps
on DND and close Outlook. Some people like to use the camera during meetings.
Others don't. I am good either way. During team **only** meetings I do like
that everyone has the camera on. I will typically use my iPad to take notes
during meetings. As stated above, I have many workflows built into my phone
and the use of my iPad helps to keep things straight for me. Rest assured, I'm
not checking the score of the big game.
**Humans stating opinions as facts** are a trigger for me.
**Humans who gossip** are a trigger for me.
**I am not writing about you.** I've been writing a blog (off and on) for a
long time and continue to write. While the topics might spring from recent
events, the humans involved in the writing are always made up. I am not
writing about you. I try to write all the time.
**This document is a [living breathing
thing](https://github.com/randsleadershipslack/documents-and-
resources/blob/master/howtorands.md)** and likely incomplete. I will update it
frequently and would appreciate your feedback.
* Original Date: June 15, 2018
* Updated: March 20, 2021
",2018-06-15,how-to-ryan,"Hi, welcome to the team. I'm so glad you are here at \$COMPANY.
It's going to take a solid 90 days to figure this place out. I understand the
importance of first impressions, and I know you want to get a check in the win
column, but this is a …
",How to Ryan,https://www.ryancheley.com/2018/06/15/how-to-ryan/
ryan,musings,"## Game Structure
Hockey has some stuff in common with live theater. No ... really! 😁
They both have dressing rooms and they both have intermission ... but that is
probably where the similarities end.
Each hockey game is split into three 20 minute periods. There is an
intermission between each period that lasts 18 minutes. During the
intermission the players go back to the dressing room to regroup, chat
about the previous period, and strategize for the upcoming period.
Out in the arena there are chances for you to get overpriced refreshments,
stand in long lines to use the facilities, or just stay in your seat and watch
the silly intermission games.
Some examples I've seen of silly intermission games are Fuego Pong (like
quarters, but with soccer balls and large 5 gallon buckets), ice bowling
where a player is put into a giant slingshot on the ice and hurled towards
inflatable bowling pins, and the dress up game.
It's also during this time that the ice is resurfaced by a
[Zamboni](https://en.m.wikipedia.org/wiki/Ice_resurfacer) to make it nice and
clean for the next period.
If at the end of the third period the game is tied then you're in luck because
you get free hockey, also known as Overtime. One thing to keep in mind is that
the overtime rules during a regular season game are different than those in
a postseason game.
### Regular Season Overtime Rules
At the end of the third period there is a 1 minute 'intermission' and then a 5
minute overtime period starts. The overtime period will feature 3 skaters from
each team as well as their goalie.
If a penalty occurs in Overtime (or is carried over from the third period)
the period starts with four players on the power play team and three on the
short handed team.1
Each team tries to score a goal first. If they do, then they win in overtime.
If, at the end of 5 minutes of play, the score is still tied then a shootout
happens.
In the shootout each team has 3 chances to score a [penalty
shot](https://en.wikipedia.org/wiki/Penalty_shot_\(ice_hockey\)). Essentially
a skater from each team has the opportunity to try and score a goal with only
the goalie trying to prevent it. If at the end of the three rounds we're still
tied, we keep sending out skaters to try and get that penalty shot until one
team is victorious. The record for most rounds of a shoot out is [20
rounds](https://youtu.be/oH79V8zcMKk?si=pZYQ0ANCpsPrt-5z) in the NHL, and 16
rounds in the AHL.
### Postseason Overtime Rules
Postseason overtime rules are a bit different. Basically you just keep adding
20 minute periods until someone scores. Once a team scores they have won that
game. The longest overtime in NHL Postseason history went into the 6th
overtime and was [played in 1936](https://records.nhl.com/records/playoff-
team-records/overtime/longest-overtime-playoff) between the Detroit Red Wings
and the Montreal Maroons. The longest AHL overtime was between the [Charlotte
Checkers and the Lehigh Valley
Phantoms](https://www.phantomshockey.com/timeline-relive-longest-game-ahl-
history/#:~:text=The%20game%2C%20which%20took%20place,series%20lead%20over%20the%20Checkers)
which went into a 5th overtime period. This game started at 7:03 pm local and
didn't finish until almost 3:00 am local the next day!
In general, most hockey games don't get past the first OT period. From the 2006
playoffs through to the 2024 playoffs there have only been 52 games that have
gone into a second overtime period (out of [1312](https://ahl-
data.ryancheley.com/games?sql=select%0D%0A++g.game_status%0D%0A++%2C+min%28g.game_date%29%0D%0A++%2C+count%28%2A%29%0D%0Afrom%0D%0A++games+g%0D%0Ainner+join+dim_date+as+d+on+g.game_date+%3D+d.date%0D%0Awhere+d.season_phase+%3D+%27post%27%0D%0Agroup+by+g.game_status%0D%0Aorder+by+g.game_status&_hide_sql=1)).
OK, you've got a few basics 'under your belt'. In the next part I'll try and
answer the question, 'What should I watch?'.
1. essentially it would be a short Overtime period and probably pretty boring ↩︎
",2025-01-29,how-to-watch-a-hockey-game-game-play,"## Game Structure
Hockey has some stuff in common with live theater. No ... really! 😁
They both have dressing rooms and they both have intermission ... but that is
probably where the similarities end.
Each hockey game is split into three 20 minute periods. There is an
intermission between each period that lasts …
",How to Watch a Hockey Game - Game Play,https://www.ryancheley.com/2025/01/29/how-to-watch-a-hockey-game-game-play/