author,category,content,published_date,slug,summary,title,url
ryan,management,"In every organization, three critical elements determine success: People, Processes, and Priorities. While all are essential, their ranking matters profoundly. Based on my experience across several organizations, I've found that Processes must come first, followed by People, with Priorities anchored firmly at the foundation. This deliberate ordering—Processes at the top, People in the middle, and Priorities as bedrock—creates the most stable and effective organizational structure. When Processes guide how People work and how Priorities are determined, organizations can avoid the chaos of constant priority shifts, reduce dependency on specific individuals, and create consistent frameworks for decision-making. ## Defining Terms Let's define what each of these means from an organizational perspective: 1. Processes - How to solve the problems 2. People - Who will solve the problems 3. Priorities - The order in which to solve the problems ## Process In my experience, ranking Priorities first leads to lots of changes to Priorities. This week it's shipping a new feature to make all of the buttons cornflower blue ... next week it's adding AI to the application. The week after that it's to mine bitcoin. Priorities shift, and that's OK, but priority-driven organizations seem to not have a true defining north star to help guide them, which in my experience leads to chaos. Ranking People first sounds like a good idea. I mean, who doesn't want to put People first? I have found, however, that when People are prioritized first, bad things can happen. Cliques can form. Only Sally can do thing X, and they're out for the next three weeks, and no, there isn't any documentation on how to do that. Management can be lax because that's just Bob being Bob, which can lead to toxic work environments. I think that putting Process first helps to mitigate, though not outright eliminate, these concerns. 
Processes help to determine how we do thing **X**. If Sally is out, that's OK because we have a _Process_ and documentation to help us through it. Will we get it done as quickly as Sally would have gotten it done? No, but we will get it done before they come back. Processes also help implement things like Codes of Conduct. Again, that won't prevent cliques from forming, and no, it won't keep Bob from being a jerk, but it creates a framework to help deal with Bob being a jerk and potentially removing him from the situation entirely. Processes can also help with prioritization. Having a Process that helps to guide HOW you prioritize can be very helpful. This doesn't prevent you from switching up your Priorities, but it does help to keep you focused on something long enough to complete it. And when you need to change a priority it's a lot easier (and healthier) to be able to point to the Process that drove the decision to change versus a statement like, ""I don't know, the CEO saw something on Bloomberg and now we're doing this."" Setting up Processes is hard. And in a small environment it can seem like it's not worth it. For example, asking ""Why do we have a 17-page document that talks about how Priorities are chosen if it's just a handful of People?"" Yes, that IS hard. And it might not seem like it's worth it. But you don't need a big long document to determine a Process on how to change Priorities. It can be as simple as > We are small and acknowledge that change is required. We will only change > when a consensus of 60% of the team agree with the change OR if the CEO and > CFO agree on the change. More complicated Processes can come later. But at least now when a change is needed you know HOW you're going to talk about that change! ## People What comes second? I find that People should be next. It's the People that are going to help make everything happen. 
It's the People that are going to help get you over the finish line of the projects that are driven by your Processes. It's People that will work the Processes. Once you have good Processes and good People, then you can really start to set Priorities that EVERYONE will understand. ### An Example My least favorite answer to the question, ""Why do we do it this way?"" is ""I don't know."" In my opinion this points to a broken culture. It could be that when you started you did ask questions, but you were shot down so many times for asking that you just stopped asking. It could be that you're not very curious: someone just told you how to do it without providing a reason, and you accepted it as gospel that this is the way that it needs to be done. The reason why this is a toxic trait is that you can have a situation like this occur: While working on a report, a requester indicated that the margins weren't quite right and it was VERY important that they be 'just so'. I met with the requester and asked them about the Process and it went something like this: [![](https://mermaid.ink/img/pako:eNpVkMtqwzAQRX9FzNoOjvxqvCgUSqCLQKGr1OpiGo1iU1kKiozjhvx75aRpk4Vg7pmDdNERNlYSVKC0HTYNOi_MTtWvrjWeLa3rPoRRrfm0h3pKrDXsZUpnrEkFtqqXYWLX9coa3zBltSR3Y63vrTWh-5e8w31TP3lGRjKr2DhtfePswHDA8ZL_7Kkhi2OJrR7j-JFd-gUyEH1d0W-3QLup0D1eB4zG9Kgv_Py-MBBBR67DVoYPOQrDmADfUEcCqjBKUthrL0CYU1Cx9_ZtNBuovOspgn4n0dNzi1uH3RXu0LxbexuhOsIBKs7TGedZXmQ8S_IkS8oIxoDLWZHMebFYhFOWGT9F8H2-IJk95ClP87RIF0XCi2IegbP9toFKod7T6Qc7uJk4?type=png)](https://mermaid.live/edit#pako:eNpVkMtqwzAQRX9FzNoOjvxqvCgUSqCLQKGr1OpiGo1iU1kKiozjhvx75aRpk4Vg7pmDdNERNlYSVKC0HTYNOi_MTtWvrjWeLa3rPoRRrfm0h3pKrDXsZUpnrEkFtqqXYWLX9coa3zBltSR3Y63vrTWh-5e8w31TP3lGRjKr2DhtfePswHDA8ZL_7Kkhi2OJrR7j-JFd-gUyEH1d0W-3QLup0D1eB4zG9Kgv_Py-MBBBR67DVoYPOQrDmADfUEcCqjBKUthrL0CYU1Cx9_ZtNBuovOspgn4n0dNzi1uH3RXu0LxbexuhOsIBKs7TGedZXmQ8S_IkS8oIxoDLWZHMebFYhFOWGT9F8H2-IJk95ClP87RIF0XCi2IegbP9toFKod7T6Qc7uJk4) When I drew out the flow and asked the requester why, they said, 
""I don't know, that's just how Tim trained me."" I was fortunate that Tim was still at the company, so I called him and asked about the Process. He laughed and said something to the effect of, ""They're still doing that? I only had that in place because of an issue with a fax machine 8 years ago, but IT fixed it. Why are they still doing it that way?"" ""Because that's how they were trained."" 🤦🏻‍♂️ Always understand why you're doing a thing. Always. This points to the need for Process, and why I place it first. Process matters and it helps to inform the People what they need to do. ## Priorities Why are Priorities last? How can something as important as Priorities be last? I would argue that Priorities should be the bedrock of your organization and they should be HARD to change. Constantly shifting Priorities leads to dissatisfaction and burnout. It can also lead People to wonder if what they do actually matters. If it's always changing, why should I care about what I'm working on right now if it's just going to be different later today, tomorrow, or next week? The interplay between Processes, People, and Priorities forms the backbone of any effective organization. By putting Processes first, we create the infrastructure that enables People to thrive and Priorities to remain stable. Good Processes provide clarity, continuity, and a framework for decision-making that transcends individual preferences or momentary urgencies. When organizations understand that Priorities should be difficult to change—and that a clear Process should govern how and when they change—they protect their teams from the whiplash of constant redirection. This stability doesn't mean rigidity; rather, it ensures that when change does occur, it happens deliberately, transparently, and with organizational buy-in. Whether you're leading a startup of five People or managing departments within a large corporation, begin by examining your Processes. Are they documented? 
Do People understand not just what to do, but why? Is there a clear Process for establishing and modifying Priorities? If you can answer ""yes"" to these questions, you've laid the groundwork for an organization where People can contribute meaningfully to Priorities that truly matter. Remember: Process first, People second, and Priorities as the bedrock. Get this order right, and you'll build an organization that can handle change without losing its way. ",2025-03-09,Process-People-and-Priorities,"In every organization, three critical elements determine success: People, Processes, and Priorities. While all are essential, their ranking matters profoundly. Based on my experience across several organizations, I've found that Processes must come first, followed by People, with Priorities anchored firmly at the foundation. This deliberate ordering—Processes at the … ","Process, People, and Priorities",https://www.ryancheley.com/2025/03/09/Process-People-and-Priorities/
ryan,musings,"The [Tableau Conference](https://tc19.tableau.com) was held at the Mandalay Bay Convention Center this year (and will be again next year in 2020). I had the opportunity to attend (several weeks ago) and decided to write up my thoughts about it. This is an introverted newbie’s guide to navigating the conference. The conference started on Tuesday with pre-conference sessions that you had to register for (and pay for). I did not attend those. Tuesday night there was a big welcome reception that I very nearly bailed on because of how many people there were, but I decided to give it a shot anyway. I’m glad I did. The welcome reception (as well as all of the meals) was held in the data village (basically the convention show floor), which was a little weird but it worked. In the reception they had industry-specific areas (healthcare being one of them). I didn’t know this going in ... I just kind of stumbled into it. 
This was the luckiest break I could have had as I sat there the entire night and met about 10 people. Three of them (Josh, Kerry, and Molly) I spoke to the most, so much so that we decided that we’d go to the ‘Data Night Out’ (the client party) together. Being super introverted this was not my jam, but I’m glad I went, and I will go again next year. Each day is jam-packed full of sessions. I didn’t come across any sessions that were not worthwhile, although some were better than others. You do have to register for the session in order to gain admittance to the room (they scan your badge to make sure you belong) but there seemed to be standby room in most of the sessions I attended. ## Keynote events There are ‘Keynote’ events to kick off each day. They happen in the Mandalay Bay events center, but there is also an overflow room you can watch them from. I would recommend going to at least one event in the events center, but as an introvert the overflow was really more my speed. A room that could seat 500 people with only 50 in it ... yes please! ## Iron Viz A take on Iron Chef, Iron Viz was a chance for 3 Tableau wizards to showcase their skills with Tableau and a shared data set. It was really interesting to see the different ways that the data could be presented and the different stories that each competitor told with their visualizations. ## Data Night Out I didn’t do this, mostly because by Thursday I was pretty overwhelmed and just needed a quiet night in. I don’t regret not going, but I think I will make myself go next year. ## Data Culture I’m going to write more on this once I get my head really wrapped around it, but suffice it to say, this is something that I think is going to be very important going forward for the organization I work for. ",2019-12-17,a-beginners-guide-to-tableau-conference-2019-edition,"The [Tableau Conference](https://tc19.tableau.com) was held at the Mandalay Bay Convention Center this year (and will be again next year in 2020). 
I had the opportunity to attend (several weeks ago) and decided to write up my thoughts about it. This is an introverted newbie’s guide to navigating the conference. The … ",A beginners guide to Tableau Conference - 2019 edition,https://www.ryancheley.com/2019/12/17/a-beginners-guide-to-tableau-conference-2019-edition/
ryan,musings,"One of the earliest memories of my grandmother is visiting her in 29 Palms 1 2 in her permanent mobile home. I remember sitting on the davenport watching the Dodgers on a small 13"" COLOR CRT TV. I remember that the game was broadcast on KTLA5. But what I remember the most is the voice of Vin Scully. I don't know who the Dodgers were playing, but I remember how much my grandmother LOVED to listen to Vin call the game. And it stuck with me. I was probably about 7 or 8 and I thought baseball was ""boring"". To be fair, I thought most sports were boring, but especially baseball. Nothing ever happens! But, I loved my grandmother, and I loved hanging out with her 3 and so I watched the game with her. Years later I discovered that yes, I did like baseball, and no, it was not boring. And since my grandmother was a Dodgers fan, I would be too. It was something that connected us. It didn't matter where I lived, or how old I was; we both loved baseball. We both loved the Dodgers. We both loved to hear Vin call the game. My grandmother died in 2007, but something that helped to connect me to her in the years since was watching the Dodgers. Listening to Vin. As Vin got older, he still called the home games, but he handed most of the road games to a new crew. I still loved to watch Dodgers games, but I loved watching the games he called a _little_ bit more. At the start of each season I always kind of wondered, ""Is this the last year for Vin?"" And in 2016 the answer was yes. I still remember the last game [he called in Dodgers Stadium](https://www.espn.com/mlb/game/_/gameId/360925119). I remember the back and forth. 
I remember the Rockies going up 1 run in the top of the 9th. And the Dodgers tying it back up in the bottom of the 9th. And I remember when [Charlie Culberson hit the game winning home run in the bottom of the 10th](https://youtu.be/HayOXW09kl8). I remember the last game [Vin called in San Francisco](https://www.ryancheley.com/2016/10/03/vins-last-game/). I remember the Dodgers lost ... but it was Vin's last game, so I still loved getting the chance to watch it. And to listen to him call the game. Vin passed at the age of 94 on Aug 2, 2022. Just as I knew that there would be a day when Vin retired from calling games, I knew there would be a day when he wouldn't be with us anymore. I've been trying to process this and figure out _why_ this is hitting me as hard as it is. It all comes back to my grandmother. They never met each other (at least I don't think they did), but in my head they were inextricably connected. Vin was a connection to my grandmother that I didn't fully realize I had, and with his passing that connection isn't there anymore. He hasn't called a game in more than 5 years, but still, knowing that he NEVER will again is hitting a bit hard for me. And I think it's because it reminds me that my grandma isn't here to watch the games with me anymore, and that bums me out. She was a cool lady who always loved the Dodgers ... and Vin. #WinForVin 1. Yes that 29 Palms, right next to the [LARGEST Marine Corps Base in the WORLD](https://en.wikipedia.org/wiki/Marine_Corps_Air_Ground_Combat_Center_Twentynine_Palms) ↩︎ 2. also the 29 Palms that is right next to [Joshua Tree](https://en.wikipedia.org/wiki/Joshua_Tree,_California), home to the [National Park](https://en.wikipedia.org/wiki/Joshua_Tree_National_Park) that is the current catnip of Hipsters ↩︎ 3. 
she always had the [butterscotch hard candies](https://www.candynation.com/butterscotch-candy-buttons) that were my favorite ↩︎ ",2022-08-05,a-goodbye-to-vin,"One of the earliest memories of my grandmother is visiting her in 29 Palms 1 2 in her permanent mobile home. I remember sitting on the davenport watching the Dodgers on a small 13"" COLOR CRT TV. I remember that the game was broadcast on KTLA5. But what I remember … ",A Goodbye to Vin,https://www.ryancheley.com/2022/08/05/a-goodbye-to-vin/
ryan,musings,"This is mostly for me to write down my notes and thoughts about the book “How to Win Friends and Influence People.” I’ve noted the summary from the end of each section below (so I don’t forget what they were). The first three sections seemed to speak to my modern sensibilities the most (keep in mind this book was published in 1936 ... the version I read was revised in 1981). I have the summaries below, for reference, but I wanted to have my own take on each. ## Fundamental Techniques in Handling People This seems to be a long way of saying “Use the **Golden Rule**” over and over again. The three points are: 1. Don’t criticize, condemn or complain 2. Give honest and sincere appreciation 3. Arouse in the other person an eager want ## Six ways to make people like you The ‘rules’ presented here are also useful for making small talk at parties (or other gatherings). I find that talking about myself with a total stranger is about the hardest thing I can do. I try to engage with people at parties and have what I hope are interesting questions to ask should I need to. Stuff I tend to avoid: * What do you do for a living? * Where do you work? * Sports * Politics Stuff I try to focus on: * How do you know the host / acquaintance we may have in common? * What’s the most interesting problem you’ve solved or are working to solve in the last week? * Have you been on a vacation recently? What was your favorite part about it? 
(With this one I don’t let people off the hook with ‘being away from work’ ... I try to find something that they really found enjoyable and interesting.) These talking points are usually a pretty good starting point. Sometimes when I’m introduced to a person they are introduced by their job, i.e. This is Sally Jones, she’s a Doctor at the local Hospital. I’ll use that to parlay away from something work-focused (what kind of doctor are you?) to something more person-focused: why did you want to become a doctor? Where did you go to Medical School? Did you know you always wanted to be a doctor? I try to focus on getting to know them better and have them talk about themselves. The tips from the book support my intuition when meeting new people. They are: 1. Become genuinely interested in other people 2. Smile 3. Remember that a person’s name is to that person the sweetest and most important sound in any language 4. Be a good listener. Encourage others to talk about themselves 5. Talk in terms of the other person’s interests 6. Make the other person feel important - and do it sincerely ## How to Win People to your way of thinking This section provided the most useful and helpful information (for me anyway!). It really leads to how to have better influence (rather than how to win friends). One of the problems I’ve suffered from throughout my life is the **need** to be right about a thing. This section has concrete tips and examples of how to not be the smartest person in the room, but to work on being the most influential person in the room. My favorite is the first one, which I’ll paraphrase as “The only way to win an argument is to avoid it!” I’d never thought about trying to avoid arguments, only how to win them once I was in them. The idea reminds me a bit of [War Games](https://en.m.wikipedia.org/wiki/WarGames ""War Game with Matthew Broderick \(1984\)""). 
At the end, Joshua, the supercomputer that is trying to figure out how to win a Nuclear War with the USSR, concedes that the only way to win is to not play at all. Just like an argument. The other piece that really struck me was to get the other person to say ‘Yes’. This is kind of sales-y and could be smarmy if used with a subtext of insincerity, but I think that, with the examples given in the book, and used in the context of trying to win friends AND influence people, it can go a long way. The tips from this section of the book are: 1. The only way to get the best of an argument is to avoid it 2. Show respect for the other person’s opinions. Never say “You’re wrong” 3. If you are wrong, admit it quickly and emphatically 4. Begin in a friendly way 5. Get the other person saying “yes, yes” immediately 6. Let the other person do a great deal of the talking 7. Let the other person feel that the idea is his or hers 8. Try honestly to see things from the other person’s perspective 9. Be sympathetic with the other person’s ideas and desires 10. Appeal to the nobler motives 11. Dramatize your ideas 12. Throw down a challenge ## Be a Leader: How to change people without giving offense or arousing resentment This section has the best points, but the stories were _very_ contrived. Again, this goes to how to win influence more than winning friends. Some of the items are a bit too 1930s for my taste (numbers 2, 3, and 6 in particular seem overly outdated). But overall, they are good ideas to work towards. The tips are: 1. Begin with praise and honest appreciation 2. Call attention to the person’s mistakes indirectly 3. Talk about your own mistakes before criticizing the other person 4. Ask questions instead of giving direct orders 5. Let the other person save face 6. Praise the slightest improvement and praise every improvement. Be “hearty in your approbation and lavish in your praise” 7. Give the other person a fine reputation to live up to 8. Use encouragement. 
Make the fault seem easy to correct 9. Make the other person happy about doing the thing you suggest Overall I’m really glad that I read this book and glad that my [CHIME](https://chimecentral.org) mentor [Tim Gibbs](https://www.linkedin.com/in/srtim/) recommended it to me. I’ve been actively working to incorporate these ideas into my work and home life and have found some surprising benefits. It’s also helping to make me a little less stressed out. If you’re looking for a bit of help in trying to be a better influencer in your organization or your personal life, [this book](https://www.amazon.com/How-Win-Friends-Influence-People/dp/1439167346/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1527122851&sr=8-1 ""How to Win Friends and Influence People"") is well worth the read. ",2018-05-23,a-summary-of-dale-carnegies-how-to-win-friends-and-influence-people,"This is mostly for me to write down my notes and thoughts about the book “How to Win Friends and Influence People.” I’ve noted the summary from the end of each section below (so I don’t forget what they were). The first three sections seemed to speak … ",A Summary of Dale Carnegie’s “How to Win Friends and Influence People”,https://www.ryancheley.com/2018/05/23/a-summary-of-dale-carnegies-how-to-win-friends-and-influence-people/
Ryan Cheley,pages,"I'm Ryan Cheley and this is my site. I've got various places on the internet you can find me, like [GitHub](https://github.com/ryancheley), [Mastodon](https://mastodon.social/@ryancheley), and [here](/)! I like writing [Python](https://www.python.org), and when developing web stuff, I like to use [Django](https://www.djangoproject.com). A couple of Django projects I've done can be found [here](https://stadiatracker.com/Pages/home) and [here](https://doestatisjrhaveanerrortoday.com). The source code for [DoesTatisJrHaveAnErrorToday.com](https://doestatisjrhaveanerrortoday.com) can be found [here](https://github.com/ryancheley/tatis). 
If you're really interested, you can find my CV [here](/cv/). ",2025-04-02,about,"I'm Ryan Cheley and this is my site. I've got various places on the internet you can find me, like [GitHub](https://github.com/ryancheley), [Mastodon](https://mastodon.social/@ryancheley), and [here](/)! I like writing [Python](https://www.python.org), and when developing web stuff, I like to use [Django](https://www.djangoproject.com). A couple of Django projects I've done can be found [here](https://stadiatracker.com/Pages/home) and … ",About,https://www.ryancheley.com/pages/about/
ryan,technology,"Over the long holiday weekend I had the opportunity to play around a bit with some of my Raspberry Pi scripts and try to do some fine tuning. I mostly failed in getting anything to run better, but I did discover that not having my code in version control was a bad idea. (Duh) I spent the better part of an hour trying to find a script that I had accidentally deleted somewhere in my blog. Turns out it was (mostly) there, but it didn’t ‘feel’ right … though I’m not sure why. I was able to restore the file from my blog archive, but I decided that was a dumb way to live, especially given that: 1. I use version control at work (and have for the last 15 years) 2. I’ve used it for other personal projects However, I’ve only ever used a GUI version of either subversion (at work) or GitHub (for personal projects via PyCharm). I’ve never used it from the command line. And so, with a bit of time on my hands, I dove in to see what needed to be done. Turns out, not much. I used this [GitHub](https://help.github.com/articles/adding-an-existing-project-to-github-using-the-command-line/) resource to get me what I needed. Only a couple of commands and I was in business. The problem is that I have a terrible memory and this isn’t something I’m going to do very often. So, I decided to write a bash script to encapsulate all of the commands and help me out a bit. 
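For reference, the one-time setup from that GitHub guide boils down to a handful of commands. This is a sketch from memory, wrapped in a function, not the exact steps from the linked article; the repository URL you pass in is a placeholder:

```shell
# A sketch of the one-time setup for putting an existing project on GitHub.
# Not the exact commands from the linked article; the repo URL is a placeholder.
setup_repo() {
    git init                          # turn the project directory into a repo
    git add .                         # stage everything
    git commit -m 'Initial commit'    # make the first commit
    git remote add origin $1          # point at the (empty) GitHub repo
    git push -u origin master         # push and set the upstream branch
}
```

You'd call it from inside the project directory, e.g. `setup_repo https://github.com/<user>/<repo>.git`.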
The script looks like this:

    git add .
    echo ""Enter your commit message:""
    read commit_msg
    git commit -m ""$commit_msg""
    git remote add origin path/to/repository
    git remote -v
    git push -u origin master

I just recently learned about user input in bash scripts and was really excited about the opportunity to be able to use it. Turns out it didn’t take long to try it out! (God I love learning things!) What the script does is stage and commit the files that have been changed (all of them), add the project to the origin on the GitHub repo that has been specified, print verbose logging to the screen (so I can tell what I’ve messed up if it happens), and then push the changes to the master. This script doesn’t allow you to specify what files to commit, nor does it allow for branching and tagging … but I don’t need those (yet). I added this script to 3 of my projects, each of which can be found in the following GitHub Repos: * [rpicamera-hummingbird](https://github.com/ryancheley/rpicamera-hummingbird) * [rpi-dodgers](https://github.com/ryancheley/rpi-dodgers) * [rpi-kings](https://github.com/ryancheley/rpi-kings) I had to make the commit.sh executable (with `chmod +x commit.sh`) but other than that it’s basically plug and play. ## Addendum I made a change to my Kings script tonight (Nov 27) and it wouldn’t get pushed to git. After a bit of Googling and playing around, I determined that the original script would only push changes to an empty repo ... not one with stuff, like I had. The script now looks like this:

    git add .
    echo ""Enter your commit message:""
    read commit_msg
    git commit -m ""$commit_msg""
    git push

Changes made to the post (and the GitHub repo!) ",2018-11-25,adding-my-raspberry-pi-project-code-to-github,"Over the long holiday weekend I had the opportunity to play around a bit with some of my Raspberry Pi scripts and try to do some fine tuning. 
I mostly failed in getting anything to run better, but I did discover that not having my code in version control was … ",Adding my Raspberry Pi Project code to GitHub,https://www.ryancheley.com/2018/11/25/adding-my-raspberry-pi-project-code-to-github/
ryan,technology,"Last summer I migrated my blog from [Wordpress](https://wordpress.com) to [Pelican](https://getpelican.com). I did this for a couple of reasons (see my post [here](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-wordpress/)), but one thing that I was a bit worried about when I migrated was that Pelican's offering for site search didn't look promising. There was an outdated plugin called [tipue-search](https://github.com/pelican-plugins/tipue-search), but when I was looking at it I could tell it was on its last legs. I thought about it, and since my blog isn't super highly trafficked AND you can use google to search a specific site, I figured I could wait a bit and see what options came up. After waiting a few months, I decided it would be interesting to see if I could write a SQLite utility to get the data from my blog, add it to a SQLite database and then use [datasette](https://datasette.io) to serve it up. I wrote the beginning scaffolding for it last August in a utility called [pelican-to-sqlite](https://pypi.org/project/pelican-to-sqlite/0.1/), but I ran into several technical issues I just couldn't overcome. I thought about giving up, but sometimes you just need to take a step away from a thing, right? After the first of the year I decided to revisit my idea, but first looked to see if there was anything new for Pelican search. 
I found a plugin called [search](https://github.com/pelican-plugins/search) that was released last November and is actively being developed, but as I read through the documentation there was just **A LOT** of stuff: * stork * requirements for the structure of your page html * static asset hosting * deployment requires updating your `nginx` settings These all looked a bit scary to me, and since I've done some work using [datasette](https://datasette.io) I thought I'd revisit my initial idea. ## My First Attempt As I mentioned above, I wrote the beginning scaffolding late last summer. In my first attempt I tried to use a few tools to read the `md` files and parse their `yaml` structure, and it just didn't work out. I also realized that `Pelican` can have [reStructured Text](https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html) and that any attempt to parse just the `md` files would never work for those file types. ## My Second Attempt ### The Plugin During the holiday I thought a bit about approaching the problem from a different perspective. My initial idea was to try and write a `datasette`-style package to read the data from `pelican`. I decided instead to see if I could write a `pelican` plugin to get the data and then add it to a SQLite database. It turns out, I can, and it's not that hard. Pelican uses `signals` to make plugin creation a pretty easy thing. I read a [post](https://blog.geographer.fr/pelican-plugins) and the [documentation](https://docs.getpelican.com/en/latest/plugins.html) and was able to start my effort to refactor `pelican-to-sqlite`. From [The missing Pelican plugins guide](https://blog.geographer.fr/pelican-plugins) I saw lots of different options, but realized that the signal `article_generator_write_article` is what I needed to get the article content. I then also used `sqlite_utils` to insert the data into a database table. 
    def save_items(record: dict, table: str, db: sqlite_utils.Database) -> None:  # pragma: no cover
        db[table].insert(record, pk=""slug"", alter=True, replace=True)

Below is the method I wrote to take the content and turn it into a dictionary which can be used in the `save_items` method above.

    def create_record(content) -> dict:
        record = {}
        author = content.author.name
        category = content.category.name
        post_content = html2text.html2text(content.content)
        published_date = content.date.strftime(""%Y-%m-%d"")
        slug = content.slug
        summary = html2text.html2text(content.summary)
        title = content.title
        url = ""https://www.ryancheley.com/"" + content.url
        status = content.status
        if status == ""published"":
            record = {
                ""author"": author,
                ""category"": category,
                ""content"": post_content,
                ""published_date"": published_date,
                ""slug"": slug,
                ""summary"": summary,
                ""title"": title,
                ""url"": url,
            }
        return record

Putting these together, I get a method used by the Pelican plugin system that will generate the data I need for the site AND insert it into a SQLite database.

    def run(_, content):
        record = create_record(content)
        save_items(record, ""content"", db)

    def register():
        signals.article_generator_write_article.connect(run)

### The html template update I use a custom implementation of [Smashing Magazine](https://www.smashingmagazine.com/2009/08/designing-a-html-5-layout-from-scratch/). This allows me to do some edits, though I mostly keep it pretty stock. However, this allowed me to make a small edit to the `base.html` template to include a search form. In order to add the search form I added the following code to `base.html` below the `nav` tag:
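The form markup itself isn't reproduced above; a minimal sketch of what it might look like follows. The action URL is an assumption built from datasette's `/<database>/<query-name>` URL pattern and the `search-ryancheley` Vercel project name used later, and `article_search` is the saved query described below, which takes its `:text` parameter from the query string:

```html
<!-- Hypothetical sketch of the search form; the original markup isn't
     shown here, and the Vercel hostname is assumed, not confirmed. -->
<form action='https://search-ryancheley.vercel.app/pelican/article_search' method='get'>
  <input type='search' name='text' placeholder='Search the site'>
  <input type='submit' value='Search'>
</form>
```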
### Putting it all together with datasette and Vercel

Here's where the **magic** starts. Publishing data to Vercel with `datasette` is extremely easy with the `datasette` plugin [`datasette-publish-vercel`](https://pypi.org/project/datasette-publish-vercel/). You do need to have the [Vercel cli installed](https://vercel.com/cli), but once you do, the steps for publishing your SQLite database are really well explained in the `datasette-publish-vercel` [documentation](https://github.com/simonw/datasette-publish-vercel/blob/main/README.md). One final step was to add a `make` command so I could just type a quick command which would create my content, generate the SQLite database AND publish the SQLite database to Vercel. I added the below to my `Makefile`:

```makefile
vercel:
	{ \
	echo ""Generate content and database""; \
	make html; \
	echo ""Content generation complete""; \
	echo ""Publish data to vercel""; \
	datasette publish vercel pelican.db --project=search-ryancheley --metadata metadata.json; \
	echo ""Publishing complete""; \
	}
```

The line

```
datasette publish vercel pelican.db --project=search-ryancheley --metadata metadata.json; \
```

has an extra flag passed to it (`--metadata`) which allows me to use `metadata.json` to create a saved query which I call `article_search`. The contents of that saved query are:

```sql
select
  summary as 'Summary',
  url as 'URL',
  published_date as 'Published Date'
from content
where content like '%' || :text || '%'
order by published_date
```

This is what allows the `action` in the `form` above to have a URL to link to in `datasette` and return data! With just a few tweaks I'm able to include a search tool, powered by datasette, for my pelican blog. Needless to say, I'm pretty pumped.

## Next Steps

There are still a few things to do:

1. separate search form html file (for my site)
2. formatting `datasette` to match site (for my vercel powered instance of `datasette`)
3. update the README for `pelican-to-sqlite` package to better explain how to fully implement
4. 
Get `pelican-to-sqlite` added to the [pelican-plugins page](https://github.com/pelican-plugins/) ",2022-01-16,adding-search-to-my-pelican-blog-with-datasette,"Last summer I migrated my blog from [Wordpress](https://wordpress.com) to [Pelican](https://getpelican.com). I did this for a couple of reasons (see my post [here](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-wordpress/)), but one thing that I was a bit worried about when I migrated was that Pelican's offering for site search didn't look promising. There was an outdated plugin … ",Adding Search to My Pelican Blog with Datasette,https://www.ryancheley.com/2022/01/16/adding-search-to-my-pelican-blog-with-datasette/ ryan,microblog,"The AHL All Star Challenge was tonight and it was some of the most fun I've had at Acrisure since it opened in late 2022. Most All Star style competitions are pretty unserious, and can be, in my opinion, kind of boring as well. I mean, I LOVE baseball, but watching the All Star game is not for me. And don't get me started on the Home Run Derby. Snooze fest for me. The AHL All Star competition though was something else! A Skills day yesterday, but then the actual challenge today. Representatives from each division play in a 3-on-3 style, in two 5-minute periods. If tied at the end, the tie is broken with a shootout. The top two teams with the most wins face each other in the Championship game. The Championship game is a little different in that it's a single 6-minute period. Again, if there is a tie at the end you have a shootout. This means that you get to watch 7 'mini' games in about 2 1/2 hours. It's pretty intense. The Firebirds were the host team this year, but we only had one All Star, [Cale Fleury](https://theahl.com/stats/player/7382/86/cale-fleury). He was called up to the Kraken, so a replacement, [Jani Nyman](https://theahl.com/stats/player/10127/86/jani-nyman), was named. 
Even though the Firebirds have a really good record (24-15-1-5), they only had 1 player in the All Star Game because the Pacific Division has 10 teams (read my thoughts on that [here](https://www.ryancheley.com/2024/02/24/realign-the-ahl/)). Anyway, the competition was pretty amazing tonight, and I'm really glad I got to go. I'm kind of hoping to be able to go next year when it's in [Rockford](https://icehogs.com/news/rockford-icehogs-to-host-2026-ahl-all-star-classic). ",2025-02-03,ahl-all-star-challenge,"The AHL All Star Challenge was tonight and it was some of the most fun I've had at Acrisure since it opened in late 2022. Most All Star style competitions are pretty unserious, and can be, in my opinion, kind of boring as well. I mean, I LOVE baseball, but … ",AHL All Star Challenge,https://www.ryancheley.com/2025/02/03/ahl-all-star-challenge/ ryan,microblog,"Since the All-Star break the Firebirds entered what is arguably the softest part of their schedule with games against San Diego, Henderson, San Diego again, Bakersfield, and Tucson. These 4 teams are in the bottom of the Pacific division and in San Diego's case they are 20+ points behind the Firebirds. I'm not sure what the hell is going on, but in their first game in San Diego they won in overtime in what should have been a blowout, and in their second game in Henderson they lost by 1 goal. In their first home game post All Star break they again played San Diego and lost 5-3 (the last goal being an empty netter so 🤷🏼) but they also gave up 2 goals in less than 40 seconds in the second period. That ended up really being the difference. That means 3 games into their 5 game 'soft' patch and they're 1-2. They play Bakersfield tomorrow night and I sure hope they find a way to get back into their winning ways because this has been some pretty shitty hockey to watch. 
[The Firebirds are 2-3 against the Condors this season](https://ahl-data.ryancheley.com/games?sql=select+*%0D%0Afrom%0D%0A++games+g%0D%0Ainner+join+dim_date+d+on+g.game_date+%3D+d.date%0D%0A++where+d.season+%3D+%272024-25%27%0D%0A++and+%28%0D%0A++%28home_team%3D%27Coachella+Valley+Firebirds%27+and+away_team+%3D+%27Bakersfield+Condors%27%29%0D%0A++or+%0D%0A++away_team%3D%27Coachella+Valley+Firebirds%27+and+home_team+%3D+%27Bakersfield+Condors%27%0D%0A++%29%0D%0Aorder+by%0D%0A++g.game_id%0D%0A) and have yet to beat the Condors at home this season. To quote Han Solo, ""I have a bad feeling about this."" ",2025-02-15,all-star-break-doldrums,"Since the All-Star break the Firebirds entered what is arguably their softest part of their schedule with games against San Diego, Henderson, San Diego again, Bakersfield, and Tucson. These 4 teams are in the bottom of the Pacific division and in San Diego's case they are 20+ points behind the … ",all-star-break-doldrums,https://www.ryancheley.com/2025/02/15/all-star-break-doldrums/ ryan,musings,"About a month ago I discovered a kitschy band that did covers of current pop songs but re-imagined as Gatsbyesque versions. I was instantly in love with the new arrangements of these songs that I knew and the videos that they posted on [YouTube](https://www.youtube.com/user/ScottBradleeLovesYa). I loved it so much that I’ve been listening to them in Apple Music for a couple of weeks as well (time permitting). I mentioned this new band to Emily and she told me that they would be playing at the [McCallum Theatre](http://www.mccallumtheatre.com) and I was in utter disbelief. We bought tickets that night (DD 113 and 114 ... some of the best in the house!) and we were all set. To say that I’ve been looking forward to this concert is an understatement. For all the awesomeness that the YouTube videos have, I **knew** that a live performance would be a major event and I was not disappointed. 
I think this is a concert that anyone could enjoy and that everyone should see. This was the first concert where I was both glad to be there AND glad that I had gone (usually I’m just glad that I have gone and have a hard time enjoying the moment while I’m there). I have the set list below, mostly so I don’t forget what songs were played. It’s also really cool because some of the performers at the concert were the ones in the YouTube videos. Miche (pronounced Mickey) Braden was an amazingly soulful singer, and her part of ‘All about that Bass’ was on point and breathtaking! It was such an awesome concert. I can’t wait to see them again!

## First Set

[Thriller](https://youtu.be/td-_pUPVjdo)
[Sweet child o mine](https://youtu.be/kJ3BAF_15yQ)
[Just Like Heaven](https://youtu.be/Fjd1seT1mMQ)
[Are you going to be my girl](https://youtu.be/Cdo0lfWoqws)
[Africa](https://youtu.be/IUlRavyDP6o)
[Lean on](https://youtu.be/nzFJNsij38c)
[All about that bass](https://youtu.be/G-N3alxKyjE)

## Second Set

[Umbrella](https://youtu.be/OBmlCZTF4Xs)
[Story of my life](https://youtu.be/FASi9lrUoYM)
[Since you been gone](https://youtu.be/lhod-UI40C0)
[Crazy - Gnarls Barkley](https://youtu.be/FyFwko9O2UE)
[Heart of glass](https://youtu.be/DTMoipsvGNc)
[Habits - Tove Lo](https://youtu.be/7hHZnvjCbVw)
[Time after time](https://youtu.be/yKcPEtKu7CM)

## Encore

[Stacy's mom](https://youtu.be/T2kOj-GFN8k)
[Creep - Radiohead](https://youtu.be/m3lF2qEA2cw)
[Such Great Heights](https://youtu.be/tti76BnCL98)

## Band

Hannah Gill - vocals
Demi Remick - tap
Miche Braden - vocals
Natalie Angst - vocals
Casey Abrams - MC / vocals
Ryan Quinn - vocals
Ben the Sax Guy - sax and clarinet
Dave Tedeschi - drums
Steve Whipple - bass
Logan Evan Thomas - piano

The trombone player was amazing, but I wasn’t able to find him on the [PMJ Performers page](http://postmodernjukebox.com/performers/). 
",2018-12-15,an-evening-with-post-modern-jukebox,"About a month ago I discovered a kitschy band that did covers of current pop songs but re-imagined as Gatsbyesque versions. I was instantly in love with the new arrangements of these songs that I knew and the videos that they posted on [YouTube](https://www.youtube.com/user/ScottBradleeLovesYa). I loved it so much that … ",An Evening with Post Modern Jukebox,https://www.ryancheley.com/2018/12/15/an-evening-with-post-modern-jukebox/ ryan,musings,"The thing about HIMSS is that there are a lot of people. I mean ... a lot of people. More than 43k people will attend as speakers, exhibitors or attendees. Let that sink in for a second. No. Really. Let. That. Sink. In. That’s more than the average [attendance of an MLB game](https://www.baseball-reference.com/leagues/MLB/2017-misc.shtml ""Average attendance"") for 29 of the 30 teams. It’s ridiculous. As an introvert you know what will drain you and what will invigorate you. For me I need to be cautious of conferencing too hard. That is, I need to be aware of myself, my surroundings and my energy levels. My tips are:

1. Have a great playlist on your smart phone. I use an iPhone and get a subscription to Apple Music just for the conference. This allows me to have a killer set of music that helps to drown out the cacophony of people.
2. Know when you’ve reached your limit. Even with some sweet tunes it’s easy to get drained. When you’re done you’re done. Don’t be a hero.
3. Try to make at least one meaningful connection. I know, it’s hard. But it’s totally worth it. Other introverts are easy to spot because they’re the people on their smart phones pretending to write a blog post while listening to their sweet playlist. But if you can start a conversation, not small talk, it will be worth it. Attend a networking function that’s applicable to you and you’ll be able to find at least one or two people to connect with. 
The other tips for surviving HIMSS are the same for any other conference:

1. Don’t worry about how you’re dressed ... you will **always** be underdressed when compared to Hospital Administrators ... you’re in ‘IT’ and you dress like it
2. Wear good walking shoes (see number 1 about being underdressed)
3. Drink plenty of water
4. Wash your hands and/or have hand sanitizer
5. Accept free food when it’s offered

Ok. One day down. 3+ more to go! ",2018-03-06,an-introverts-guide-to-large-conferences-or-how-i-survived-himss-2018-and-2017-and-2016,"The thing about HIMSS is that there are a lot of people. I mean ... a lot of people. More than 43k people will attend as speakers, exhibitors or attendees. Let that sink in for a second. No. Really. Let. That. Sink. In. That’s more than the average [attendance of …](https://www.baseball-reference.com/leagues/MLB/2017-misc.shtml ""Average attendance"") ",An Introvert’s guide to large conferences ... or how I survived HIMSS 2018 (and 2017 and 2016),https://www.ryancheley.com/2018/03/06/an-introverts-guide-to-large-conferences-or-how-i-survived-himss-2018-and-2017-and-2016/ ryan,technology,"Nothing can ever really be considered **done** when you're talking about programming, right? I decided to try and add images to the [python script I wrote last week](https://github.com/miloardot/python-files/commit/e603eb863dbba169938b63df3fa82263df942984) and was able to do it, with not too much hassle. The first thing I decided to do was to update the code on `pythonista` on my iPad Pro and verify that it would run. It took some doing (mostly because I _forgot_ that the attributes in an `img` tag included what I needed ... initially I was trying to programmatically get the name of the person from the image file itself using [regular expressions](https://en.wikipedia.org/wiki/Regular_expression) ... it didn't work out well). Once that was done I branched the `master` on GitHub into a `development` branch and copied the changes there. 
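The updated script itself isn't reproduced in this post, but the img-tag realization above can be sketched roughly like this — the markup fed to the parser and the helper names are hypothetical, modeled on the team page the script reads:

```python
from html.parser import HTMLParser


class ImgParser(HTMLParser):
    # Collect (title, src) pairs from each img tag; the title attribute
    # already holds the person's name, so no regex on the filename is needed
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == 'img':
            attr = dict(attrs)
            self.images.append((attr.get('title', ''), attr.get('src', '')))


def markdown_row(name, job_title, src):
    # One row of the multimarkdown table described in this post
    return f'|{name}|{job_title}|![alt text]({src})|'


parser = ImgParser()
# Hypothetical markup, modeled on the team page described in the post
parser.feed('<img src=/user_images/Team/Ozzy.png title=Ozzy>')
print(parser.images)
```

The point is just that the `title` attribute is already the display name, so pulling it out of the parsed attributes replaces the regex-on-filename approach that didn't work out.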
Once that was done I created a **pull request** in the macOS GitHub Desktop application. Finally, I used the macOS GitHub app to merge my **pull request** from `development` into `master` and now have the changes. The updated script will now also get the image data to display in the multimarkdown table:

| Name | Title | Image |
| --- | --- | --- |
|Mike Cheley|CEO/Creative Director|![alt text](https://www.graphtek.com/user_images/Team/Mike_Cheley.png ""Mike Cheley"")|
|Ozzy|Official Greeter|![alt text](https://www.graphtek.com/user_images/Team/Ozzy.png ""Ozzy"")|
|Jay Sant|Vice President|![alt text](https://www.graphtek.com/user_images/Team/Jay_Sant.png ""Jay Sant"")|
|Shawn Isaac|Vice President|![alt text](https://www.graphtek.com/user_images/Team/Shawn_Isaac.png ""Shawn Isaac"")|
|Jason Gurzi|SEM Specialist|![alt text](https://www.graphtek.com/user_images/Team/Jason_Gurzi.png ""Jason Gurzi"")|
|Yvonne Valles|Director of First Impressions|![alt text](https://www.graphtek.com/user_images/Team/Yvonne_Valles.png ""Yvonne Valles"")|
|Ed Lowell|Senior Designer|![alt text](https://www.graphtek.com/user_images/Team/Ed_Lowell.png ""Ed Lowell"")|
|Paul Hasas|User Interface Designer|![alt text](https://www.graphtek.com/user_images/Team/Paul_Hasas.png ""Paul Hasas"")|
|Alan Schmidt|Senior Web Developer|![alt text](https://www.graphtek.com/user_images/Team/Alan_Schmidt.png ""Alan Schmidt"")|

Which gets displayed as the same table with each person's photo rendered inline. ",2016-10-22,an-update-to-my-first-python-script,"Nothing can ever really be considered **done** when you're talking about programming, right? I decided to try and add images to the [python script I wrote last week](https://github.com/miloardot/python-files/commit/e603eb863dbba169938b63df3fa82263df942984) and was able to do it, with not too much hassle. The first thing I decided to do was to update the … ",An Update to my first Python Script,https://www.ryancheley.com/2016/10/22/an-update-to-my-first-python-script/ ryan,productivity,"In my first post of this series I outlined the steps needed in order for me to post. They are: 1. Run `make html` to generate the SQLite database that powers my site's search tool1 2. Run `make vercel` to deploy the SQLite database to vercel 3. [Run `git add ` to add post to be committed to GitHub](https://www.ryancheley.com/2022/01/26/git-add-filename-automation/) 4. [Run `git commit -m ` to commit to GitHub](https://www.ryancheley.com/2022/01/28/auto-generating-the-commit-message) 5. [Post to Twitter with a link to my new post](https://www.ryancheley.com/2022/01/24/auto-tweeting-new-post/) In this post I'll be focusing on how I automated step 4, Run `git commit -m ` to commit to GitHub.

# Automating the ""git commit ..."" part of my workflow

In order for my GitHub Action to auto post to Twitter, my commit message needs to be in the form of ""New Post: ..."".
What I'm looking for is to be able to have the commit message be something like this:

> New Post: Great New Post https://ryancheley.com/yyyy/mm/dd/great-new-post/

This is basically just three parts from the markdown file: the `Title`, the `Date`, and the `Slug`. In order to get those details, I need to review the structure of the markdown file. For Pelican writing in markdown my file is structured like this:

```
Title:
Date:
Tags:
Slug:
Series:
Authors:
Status:

My words start here and go on for a bit.
```

In [the last post](https://www.ryancheley.com/2022/01/28/auto-generating-the-commit-message) I wrote about how to `git add` the files in the content directory. Here, I want to take the file that was added to `git` and get the first 7 rows, i.e. the details from `Title` to `Status`. The file that was updated and needs to be added to git can be identified by running

```shell
find content -name '*.md' -print | sed 's/^/""/g' | sed 's/$/""/g' | xargs git add
```

Running `git status` now will display which file was added with the last command and you'll see something like this:

```
❯ git status
On branch main
Untracked files:
  (use ""git add ..."" to include in what will be committed)
        content/productivity/auto-generating-the-commit-message.md
```

What I need though is a more easily parsable output. Enter the `--porcelain` flag which, per the docs

> Give the output in an easy-to-parse format for scripts. This is similar to
> the short output, but will remain stable across Git versions and regardless
> of user configuration. See below for details.

which is exactly what I needed. Running `git status --porcelain` you get this:

```
❯ git status --porcelain
?? content/productivity/more-writing-automation.md
```

Now, I just need to get the file path and exclude the status (the `??` above in this case2), which I can get by piping the results through `sed`

```
❯ git status --porcelain | sed s/^...//
content/productivity/more-writing-automation.md
```

The `sed` portion says:

* search the output string starting at the beginning of the line (`^`)
* find the first three characters (`...`)3
* replace them with nothing (`//`)

There are a couple of lines here that I need to get the content of for my commit message:

* Title
* Slug
* Date
* Status4

I can use `head` to get the first `n` lines of a file. In this case, I need the first 7 lines of the output from `git status --porcelain | sed s/^...//`. To do that, I pipe it to `head`!

```shell
git status --porcelain | sed s/^...// | xargs head -7
```

That command will return this:

```
Title: Auto Generating the Commit Message
Date: 2022-01-24
Tags: Automation
Slug: auto-generating-the-commit-message
Series: Auto Deploying my Words
Authors: ryan
Status: draft
```

In order to get the **Title**, I'll pipe this output to `grep` to find the line with `Title`

```shell
git status --porcelain | sed s/^...// | xargs head -7 | grep 'Title: '
```

which will return this

```
Title: Auto Generating the Commit Message
```

Now I just need to remove the leading `Title:` and I've got the title I'm going to need for my commit message!

```shell
git status --porcelain | sed s/^...// | xargs head -7 | grep 'Title: ' | sed -e 's/Title: //g'
```

which returns just

```
Auto Generating the Commit Message
```

I do this for each of the parts I need:

* Title
* Slug
* Date
* Status

Now, this is getting to have a lot of parts, so I'm going to throw it into a `bash` script file called `tweet.sh`.
The contents of the file look like this:

```shell
TITLE=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Title: ' | sed -e 's/Title: //g'`
SLUG=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Slug: ' | sed -e 's/Slug: //g'`
POST_DATE=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Date: ' | sed -e 's/Date: //g' | head -c 10 | grep '-' | sed -e 's/-/\//g'`
POST_STATUS=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Status: ' | sed -e 's/Status: //g'`
```

You'll see above that the `Date` piece is a little more complicated, but it's just doing a find and replace on the `-` to update them to `/` for the URL. Now that I've got all of the pieces I need, it's time to start putting them together. I define a new variable called `URL` and set it

```shell
URL=""https://ryancheley.com/$POST_DATE/$SLUG/""
```

and the commit message

```shell
MESSAGE=""New Post: $TITLE $URL""
```

Now, all I need to do is wrap this in an `if` statement so the command only runs when `POST_STATUS` is `published`

```shell
if [ $POST_STATUS = ""published"" ]
then
    MESSAGE=""New Post: $TITLE $URL""
    git commit -m ""$MESSAGE""
    git push github main
fi
```

Putting this all together (including the `git add` from my previous post) and the `tweet.sh` file looks like this:

```shell
# Add the post to git
find content -name '*.md' -print | sed 's/^/""/g' | sed 's/$/""/g' | xargs git add

# Get the parts needed for the commit message
TITLE=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Title: ' | sed -e 's/Title: //g'`
SLUG=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Slug: ' | sed -e 's/Slug: //g'`
POST_DATE=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Date: ' | sed -e 's/Date: //g' | head -c 10 | grep '-' | sed -e 's/-/\//g'`
POST_STATUS=`git status --porcelain | sed s/^...// | xargs head -7 | grep 'Status: ' | sed -e 's/Status: //g'`

URL=""https://ryancheley.com/$POST_DATE/$SLUG/""

if [ $POST_STATUS = ""published"" ]
then
    MESSAGE=""New Post: $TITLE $URL""
    git commit -m ""$MESSAGE""
    git push github main
fi
```

When this script is run it will find an updated or added markdown file (i.e. article) and add it to git. It will then parse the file to get data about the article. If the article is set to published it will commit the file with a message and will push to github. Once at GitHub, [the Tweeting action I wrote about](https://www.ryancheley.com/2022/01/24/auto-tweeting-new-post/) will tweet my commit message! In the next (and last) article, I'm going to throw it all together to get to a spot where I can run one make command that will do all of this for me.

## Caveats

The script above works, but if you have multiple articles that you're working on at the same time, it will fail pretty spectacularly. The final version of the script has guards against that and looks like [this](https://github.com/ryancheley/ryancheley.com/blob/main/tweet.sh)

1. `make vercel` actually runs `make html` so this isn't really a step that I need to do. ↩︎
2. Other values could just as easily be `M` or `A` ↩︎
3. Why the first three characters? Because that's how `porcelain` outputs the `status` ↩︎
4. I will also need the `Status` to do some conditional logic otherwise I may have a post that is in draft status that I want to commit and the GitHub Action will run posting a tweet with an article and URL that don't actually exist yet. ↩︎

",2022-01-28,auto-generating-the-commit-message,"In my first post of this series I outlined the steps needed in order for me to post. They are: 1. Run `make html` to generate the SQLite database that powers my site's search tool1 2. Run `make vercel` to deploy the SQLite database to vercel 3. 
[Run `git add ` to …](https://www.ryancheley.com/2022/01/26/git-add-filename-automation/) ",Auto Generating the Commit Message,https://www.ryancheley.com/2022/01/28/auto-generating-the-commit-message/ ryan,productivity,"Each time I write something for this site there are several steps that I go through to make sure that the post makes its way to where people can see it. 1. Run `make html` to generate the SQLite database that powers my site's search tool1 2. Run `make vercel` to deploy the SQLite database to vercel 3. Run `git add ` to add post to be committed to GitHub 4. Run `git commit -m ` to commit to GitHub 5. Post to Twitter with a link to my new post If there's more than 2 things to do, I'm totally going to forget to do one of them. The above steps are all automatable, but the one I wanted to tackle first was the automated tweet. Last night I figured out how to tweet with a GitHub action. There were a few things to do to get the auto tweet to work: 1. Find a GitHub Action in the Marketplace that did the auto tweet (or try to write one if I couldn't find one) 2. Set up a twitter app with Read and Write privileges 3. Set the necessary secrets for the repo (API Key, API Key Secret, Access Token, Access Token Secret, Bearer) 4. Test the GitHub Action The action I chose was [send-tweet-action](https://github.com/ethomson/send-tweet-action). It's got easy to read [documentation](https://github.com/ethomson/send-tweet-action/blob/main/README.md) on what is needed. Honestly the hardest part was getting a twitter app set up with Read and Write privileges. I'm still not sure how to do it, honestly. I was lucky enough that I already had an app sitting around with Read and Write from the WordPress blog I had previously, so I just regenerated the keys for that one and used them. The last bit was just testing the action and seeing that it worked as expected. It was pretty cool running an action and then seeing a tweet in my timeline. 
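The workflow file itself isn't shown in this post; a sketch of the tweeting job might look something like the following — the input names are my reading of the `send-tweet-action` README, and the secret names are placeholders, so verify both against the action's current documentation:

```yaml
# Hypothetical workflow sketch; secret names are placeholders
name: Tweet new post
on:
  push:
    branches: [main]
jobs:
  tweet:
    runs-on: ubuntu-latest
    steps:
      - uses: ethomson/send-tweet-action@v1
        with:
          status: ${{ github.event.head_commit.message }}
          consumer-key: ${{ secrets.TWITTER_API_KEY }}
          consumer-secret: ${{ secrets.TWITTER_API_KEY_SECRET }}
          access-token: ${{ secrets.TWITTER_ACCESS_TOKEN }}
          access-token-secret: ${{ secrets.TWITTER_ACCESS_TOKEN_SECRET }}
```

As written this sketch would tweet on every push to `main`, which is exactly the problem the conditional below solves.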
The TIL for this was that GitHub Actions can have conditionals. This is important because I don't want to generate a new tweet each time I commit to main. I only want that to happen when I have a new post. To do that, you just need this in the GitHub Action:

```
if: ""contains(github.event.head_commit.message, '<KEYWORD>')""
```

In my case, the `<KEYWORD>` is `New Post:`. The `send-tweet-action` has a `status` field which is the text tweeted. I can use the `github.event.head_commit.message` in the action like this:

```
${{ github.event.head_commit.message }}
```

Now when I have a commit message that starts 'New Post:' against `main` I'll have a tweet get sent out too! This got me to thinking that I can/should automate all of these steps. With that in mind, I'm going to work on getting the process down to just having to run a single command. Something like:

```
make publish ""New Post: Title of my Post https://www.ryancheley.com/yyyy/mm/dd/slug/""
```

1. `make vercel` actually runs `make html` so this isn't really a step that I need to do. ↩︎ ",2022-01-24,auto-tweeting-new-post,"Each time I write something for this site there are several steps that I go through to make sure that the post makes it's way to where people can see it. 1. Run `make html` to generate the SQLite database that powers my site's search tool1 2. Run `make vercel` to … ",Auto Tweeting New Post,https://www.ryancheley.com/2022/01/24/auto-tweeting-new-post/ ryan,technology,"We got everything set up, and now we want to automate the deployment. Why would we want to do this you ask? Let’s say that you’ve decided that you need to set up a test version of your site (what some might call UAT) on a new server (at some point I’ll write something up about about multiple Django Sites on the same server and part of this will still apply then). How can you do it? Well you’ll want to write yourself some scripts! I have a mix of Python and Shell scripts set up to do this. 
They are a bit piecemeal, but they also allow me to run specific parts of the process without having to try and execute a script with ‘commented’ out pieces.

**Python Scripts**

create_server.py
destroy_droplet.py

**Shell Scripts**

copy_for_deploy.sh
create_db.sh
create_server.sh
deploy.sh
deploy_env_variables.sh
install-code.sh
setup-server.sh
setup_nginx.sh
setup_ssl.sh
super.sh
upload-code.sh

The Python script `create_server.py` looks like this:

```python
# create_server.py
import requests
import os
from collections import namedtuple
from operator import attrgetter
from time import sleep

Server = namedtuple('Server', 'created ip_address name')

doat = os.environ['DIGITAL_OCEAN_ACCESS_TOKEN']

# Create Droplet
headers = {
    'Content-Type': 'application/json',
    'Authorization': f'Bearer {doat}',
}
data =

print('>>> Creating Server')
requests.post('https://api.digitalocean.com/v2/droplets', headers=headers, data=data)
print('>>> Server Created')
print('>>> Waiting for Server Stand up')
sleep(90)

print('>>> Getting Droplet Data')
params = (
    ('page', '1'),
    ('per_page', '10'),
)
get_droplets = requests.get('https://api.digitalocean.com/v2/droplets', headers=headers, params=params)

server_list = []

for d in get_droplets.json()['droplets']:
    server_list.append(Server(d['created_at'], d['networks']['v4'][0]['ip_address'], d['name']))

server_list = sorted(server_list, key=attrgetter('created'), reverse=True)

server_ip_address = server_list[0].ip_address
db_name = os.environ['DJANGO_PG_DB_NAME']
db_username = os.environ['DJANGO_PG_USER_NAME']

if server_ip_address != :
    print('>>> Run server setup')
    os.system(f'./setup-server.sh {server_ip_address} {db_name} {db_username}')
    print(f'>>> Server setup complete. You need to add {server_ip_address} to the ALLOWED_HOSTS section of your settings.py file ')
else:
    print('WARNING: Running Server set up will destroy your current production server. Aborting process')
```

Earlier I said that I liked Digital Ocean because of its nice API for interacting with its servers (i.e. Droplets). Here we start to see some of that. The first part of the script uses my Digital Ocean Token and some input parameters to create a Droplet via the Command Line. The `sleep(90)` allows the process to complete before I try and get the IP address. Ninety seconds is a bit longer than is needed, but I figure, better safe than sorry … I’m sure that there’s a way to call DO and ask if the just created droplet has an IP address, but I haven’t figured it out yet. After we create the droplet AND it has an IP address, we pass it to the bash script `server-setup.sh`.

```shell
# server-setup.sh
#!/bin/bash

# Create the server on Digital Ocean
export SERVER=$1

# Take secret key as 2nd argument
if [[ -z ""$1"" ]]
then
    echo ""ERROR: No value set for server ip address1""
    exit 1
fi

echo -e ""\n>>> Setting up $SERVER""

ssh root@$SERVER /bin/bash << EOF
    set -e

    echo -e ""\n>>> Updating apt sources""
    apt-get -qq update

    echo -e ""\n>>> Upgrading apt packages""
    apt-get -qq upgrade

    echo -e ""\n>>> Installing apt packages""
    apt-get -qq install python3 python3-pip python3-venv tree supervisor postgresql postgresql-contrib nginx

    echo -e ""\n>>> Create User to Run Web App""
    if getent passwd burningfiddle
    then
        echo "">>> User already present""
    else
        adduser --disabled-password --gecos """" burningfiddle
        echo -e ""\n>>> Add newly created user to www-data""
        adduser burningfiddle www-data
    fi

    echo -e ""\n>>> Make directory for code to be deployed to""
    if [[ ! -d ""/home/burningfiddle/BurningFiddle"" ]]
    then
        mkdir /home/burningfiddle/BurningFiddle
    else
        echo "">>> Skipping Deploy Folder creation - already present""
    fi

    echo -e ""\n>>> Create VirtualEnv in this directory""
    if [[ ! -d ""/home/burningfiddle/venv"" ]]
    then
        python3 -m venv /home/burningfiddle/venv
    else
        echo "">>> Skipping virtualenv creation - already present""
    fi

    # I don't think i need this anymore
    echo "">>> Start and Enable gunicorn""
    systemctl start gunicorn.socket
    systemctl enable gunicorn.socket
EOF

./setup_nginx.sh $SERVER
./deploy_env_variables.sh $SERVER
./deploy.sh $SERVER
```

All of that stuff we did before, logging into the server and running commands, we’re now doing via a script. What the above does is attempt to keep the server in an idempotent state (that is to say you can run it as many times as you want and you don’t get weird artifacts … if you’re a math nerd you may have heard idempotent in Linear Algebra to describe the multiplication of a matrix by itself and returning the original matrix … same idea here!) The one thing that is new here is the part

```shell
ssh root@$SERVER /bin/bash << EOF
...
EOF
```

A block like that says, “take everything in between `EOF` and run it on the server I just ssh’d into using bash.” 
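The `sleep(90)` workaround above could, as the post suspects, be replaced by asking Digital Ocean directly. A hedged sketch of that polling idea — the endpoint and the `networks.v4` shape match the droplet JSON the script already reads, but the function names and the polling interval are my own:

```python
import json
import time
import urllib.request


def first_ipv4(droplet):
    # A droplet's v4 network list is empty until provisioning finishes,
    # so no entries means 'not ready yet'
    v4 = droplet.get('networks', {}).get('v4', [])
    return v4[0]['ip_address'] if v4 else None


def wait_for_ip(droplet_id, token, interval=5, attempts=30):
    # Poll GET /v2/droplets/{id} until an IPv4 address shows up,
    # instead of sleeping a fixed 90 seconds
    url = f'https://api.digitalocean.com/v2/droplets/{droplet_id}'
    request = urllib.request.Request(url, headers={'Authorization': f'Bearer {token}'})
    for _ in range(attempts):
        with urllib.request.urlopen(request) as response:
            droplet = json.load(response)['droplet']
        ip_address = first_ipv4(droplet)
        if ip_address:
            return ip_address
        time.sleep(interval)
    raise TimeoutError('droplet never reported an IP address')
```

On average this returns as soon as the droplet is ready rather than always waiting the full 90 seconds.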
At the end we run 3 shell scripts: * `setup_nginx.sh` * `deploy_env_variables.sh` * `deploy.sh` Let’s review these scripts. The script `setup_nginx.sh` copies several files needed for the `nginx` service: * `gunicorn.service` * `gunicorn.socket` * `nginx.conf` It then sets up a link between the `sites-available` and `sites-enabled` directories for `nginx` and finally restarts `nginx` # setup_nginx.sh export SERVER=$1 export sitename=burningfiddle scp -r ../config/gunicorn.service root@$SERVER:/etc/systemd/system/ scp -r ../config/gunicorn.socket root@$SERVER:/etc/systemd/system/ scp -r ../config/nginx.conf root@$SERVER:/etc/nginx/sites-available/$sitename ssh root@$SERVER /bin/bash << EOF echo -e "">>> Set up site to be linked in Nginx"" ln -s /etc/nginx/sites-available/$sitename /etc/nginx/sites-enabled echo -e "">>> Restart Nginx"" systemctl restart nginx echo -e "">>> Allow Nginx Full access"" ufw allow 'Nginx Full' EOF The script `deploy_env_variables.sh` copies environment variables. There are packages (and other methods) that help to manage environment variables better than this, and that is one of the enhancements I’ll be looking at. This script captures the values of various environment variables (one at a time) and then passes them through to the server.
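One wrinkle in `setup_nginx.sh` worth noting: a bare `ln -s` exits with an error on a second run, once the link already exists, which works against the idempotency goal. A small sketch of the safer form (the `/tmp` paths here are stand-ins for the real `/etc/nginx` ones):

```shell
sitename=demo-site
mkdir -p /tmp/nginx-demo/sites-available /tmp/nginx-demo/sites-enabled
touch /tmp/nginx-demo/sites-available/$sitename

# -f replaces an existing link and -n keeps ln from descending into a
# link that points at a directory, so the line is safe to run repeatedly
ln -sfn /tmp/nginx-demo/sites-available/$sitename /tmp/nginx-demo/sites-enabled/$sitename
ln -sfn /tmp/nginx-demo/sites-available/$sitename /tmp/nginx-demo/sites-enabled/$sitename
```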
It then checks to see if these environment variables exist on the server and will place them in the `/etc/environment` file export SERVER=$1 DJANGO_SECRET_KEY=$(printenv DJANGO_SECRET_KEY) DJANGO_PG_PASSWORD=$(printenv DJANGO_PG_PASSWORD) DJANGO_PG_USER_NAME=$(printenv DJANGO_PG_USER_NAME) DJANGO_PG_DB_NAME=$(printenv DJANGO_PG_DB_NAME) DJANGO_SUPERUSER_PASSWORD=$(printenv DJANGO_SUPERUSER_PASSWORD) DJANGO_DEBUG=False ssh root@$SERVER /bin/bash << EOF if [[ ""\$DJANGO_SECRET_KEY"" != ""$DJANGO_SECRET_KEY"" ]] then echo ""DJANGO_SECRET_KEY=$DJANGO_SECRET_KEY"" >> /etc/environment else echo "">>> Skipping DJANGO_SECRET_KEY - already present"" fi if [[ ""\$DJANGO_PG_PASSWORD"" != ""$DJANGO_PG_PASSWORD"" ]] then echo ""DJANGO_PG_PASSWORD=$DJANGO_PG_PASSWORD"" >> /etc/environment else echo "">>> Skipping DJANGO_PG_PASSWORD - already present"" fi if [[ ""\$DJANGO_PG_USER_NAME"" != ""$DJANGO_PG_USER_NAME"" ]] then echo ""DJANGO_PG_USER_NAME=$DJANGO_PG_USER_NAME"" >> /etc/environment else echo "">>> Skipping DJANGO_PG_USER_NAME - already present"" fi if [[ ""\$DJANGO_PG_DB_NAME"" != ""$DJANGO_PG_DB_NAME"" ]] then echo ""DJANGO_PG_DB_NAME=$DJANGO_PG_DB_NAME"" >> /etc/environment else echo "">>> Skipping DJANGO_PG_DB_NAME - already present"" fi if [[ ""\$DJANGO_DEBUG"" != ""$DJANGO_DEBUG"" ]] then echo ""DJANGO_DEBUG=$DJANGO_DEBUG"" >> /etc/environment else echo "">>> Skipping DJANGO_DEBUG - already present"" fi EOF The `deploy.sh` calls two scripts itself: # deploy.sh #!/bin/bash set -e # Deploy Django project. export SERVER=$1 #./scripts/backup-database.sh ./upload-code.sh ./install-code.sh The final two scripts! The `upload-code.sh` script uploads the files to the `deploy` folder of the server while the `install-code.sh` script moves all of the files to where they need to be on the server and restarts any services.
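The mix of `$VAR` and `\$VAR` in that heredoc is doing the real work: unescaped variables are expanded locally before the text is sent, while escaped ones survive the trip and are expanded by the remote shell. A minimal local illustration, using plain `bash` in place of `ssh`:

```shell
LOCAL_VALUE=expanded-before-sending

# Everything between the EOF markers is fed to the inner bash;
# $LOCAL_VALUE is substituted by the outer shell, \$HOSTNAME by the inner one.
bash << EOF
echo local: $LOCAL_VALUE
echo inner: \$HOSTNAME
EOF
```

The same rule is why the comparisons above can check a remote value (`\$DJANGO_SECRET_KEY`) against a local one (`$DJANGO_SECRET_KEY`) inside a single block.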
# upload-code.sh #!/bin/bash set -e echo -e ""\n>>> Copying Django project files to server."" if [[ -z ""$SERVER"" ]] then echo ""ERROR: No value set for SERVER."" exit 1 fi echo -e ""\n>>> Preparing scripts locally."" rm -rf ../../deploy/* rsync -rv --exclude 'htmlcov' --exclude 'venv' --exclude '*__pycache__*' --exclude '*staticfiles*' --exclude '*.pyc' ../../BurningFiddle/* ../../deploy echo -e ""\n>>> Copying files to the server."" ssh root@$SERVER ""rm -rf /root/deploy/"" scp -r ../../deploy root@$SERVER:/root/ echo -e ""\n>>> Finished copying Django project files to server."" And finally, # install-code.sh #!/bin/bash # Install Django app on server. set -e echo -e ""\n>>> Installing Django project on server."" if [[ -z ""$SERVER"" ]] then echo ""ERROR: No value set for SERVER."" exit 1 fi echo $SERVER ssh root@$SERVER /bin/bash << EOF set -e echo -e ""\n>>> Activate the Virtual Environment"" source /home/burningfiddle/venv/bin/activate cd /home/burningfiddle/ echo -e ""\n>>> Deleting old files"" rm -rf /home/burningfiddle/BurningFiddle echo -e ""\n>>> Copying new files"" cp -r /root/deploy/ /home/burningfiddle/BurningFiddle echo -e ""\n>>> Installing Python packages"" pip install -r /home/burningfiddle/BurningFiddle/requirements.txt echo -e ""\n>>> Running Django migrations"" python /home/burningfiddle/BurningFiddle/manage.py migrate echo -e ""\n>>> Creating Superuser"" python /home/burningfiddle/BurningFiddle/manage.py createsuperuser --noinput --username bfadmin --email rcheley@gmail.com || true echo -e ""\n>>> Load Initial Data"" python /home/burningfiddle/BurningFiddle/manage.py loaddata /home/burningfiddle/BurningFiddle/fixtures/pages.json echo -e ""\n>>> Collecting static files"" python /home/burningfiddle/BurningFiddle/manage.py collectstatic --noinput echo -e ""\n>>> Reloading Gunicorn"" systemctl daemon-reload systemctl restart gunicorn EOF echo -e ""\n>>> Finished installing Django project on server."" ",2021-02-21,automating-the-deployment,"We got everything
set up, and now we want to automate the deployment. Why would we want to do this you ask? Let’s say that you’ve decided that you need to set up a test version of your site (what some might call UAT) on a new server … ",Automating the deployment,https://www.ryancheley.com/2021/02/21/automating-the-deployment/ ryan,productivity,"In my last post [Auto Generating the Commit Message](https://www.ryancheley.com/2022/01/28/auto-generating-the-commit-message/) I indicated that in this post I would ""throw it all together and to get a spot where I can run one make command that will do all of this for me"". I decided to take a brief detour though as I realized I didn't have a good way to create a new post, i.e. the starting point wasn't automated! In this post I'm going to go over how I create the start to a new post using `Makefile` and the command `make newpost` My initial idea was to create a new bash script (similar to the `tweet.sh` file), but as a first iteration I went in a different direction based on this post [How to Slugify Strings in Bash](https://blog.codeselfstudy.com/blog/how-to-slugify-strings-in-bash/). The command that the post finally arrived at was newpost: vim +':r templates/post.md' $(BASEDIR)/content/blog/$$(date +%Y-%m-%d)-$$(echo -n $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md which was **really** close to what I needed. My static site is set up a bit differently and I'm not using `vim` (I'm using VS Code) to write my words. The first change I needed to make was to remove the use of `vim` from the command and instead use `touch` to create the file newpost: touch $(BASEDIR)/content/blog/$$(date +%Y-%m-%d)-$$(echo -n $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md The second was to change the file path for where to create the file.
As I've indicated previously, the structure of my content looks like this: content ├── musings ├── pages ├── productivity ├── professional\ development └── technology giving me an updated version of the command that looks like this: touch content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md When I run the command `make newpost title='Automating the file creation' category='productivity'` I get an empty new file created. Now I just need to populate it with the data. There are seven bits of metadata that need to be added, but four of them are the same for each post Author: ryan Tags: Series: Remove if Not Needed Status: draft That allows me to have the `newpost` command look like this: newpost: touch content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md echo ""Author: ryan"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md echo ""Tags: "" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md echo ""Series: Remove if Not Needed"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md echo ""Status: draft"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md The remaining metadata to be added are: * Title: * Date * Slug Of these, `Date` and `Title` are the most straightforward. `bash` has a command called `date` that can be formatted in the way I want with `%F`.
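As a quick check of that format specifier, `%F` is just shorthand for `%Y-%m-%d`, the ISO year-month-day form used in the filenames:

```shell
# %F and %Y-%m-%d produce the same ISO date, e.g. 2022-02-02
today=$(date +%F)
iso=$(date +%Y-%m-%d)
echo $today
```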
Using this I can get the date like this echo ""Date: $$(date +%F)"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md For `Title` I can take the input parameter `title` like this: echo ""Title: $${title}"" > content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md `Slug` is just `Title` but _slugified_. Trying to figure out how to do this is how I found the [article](https://blog.codeselfstudy.com/blog/how-to-slugify-strings-in-bash/) above. Using a slightly modified version of the code that generates the file, we get this: printf ""Slug: "" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md echo ""$${title}"" | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md One thing to notice here is the `printf`. I needed/wanted to use `echo -n` but `make` didn't seem to like that. [This StackOverflow answer](https://stackoverflow.com/a/14121245) helped me to get a fix (using `printf`) though I'm sure there's a way I can get it to work with `echo -n`. Essentially, since this was a first pass, and I'm pretty sure I'm going to end up re-writing this as a shell script, I didn't want to spend **too** much time getting a perfect answer here.
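Pulled out of the Makefile, the slug pipeline is easier to see as a small standalone function (a sketch of mine; the recipe inlines the pipeline rather than calling a function):

```shell
# Reads a title on stdin, writes the slug on stdout: non-alphanumerics
# become dashes, runs of dashes collapse to one, then lowercase everything.
slugify() {
  sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z
}

echo 'Automating the file creation' | slugify   # automating-the-file-creation
```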
OK, with all of that, here's the entire `newpost` recipe I'm using now: newpost: touch content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md echo ""Title: $${title}"" > content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md echo ""Date: $$(date +%F)"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md echo ""Author: ryan"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md echo ""Tags: "" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md printf ""Slug: "" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md echo ""$${title}"" | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md echo ""Series: Remove if Not Needed"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md echo ""Status: draft"" >> content/$$(echo $${category})/$$(echo $${title} | sed -e 's/[^[:alnum:]]/-/g' | tr -s '-' | tr A-Z a-z.md).md This allows me to type `make newpost` and generate a new file for me to start my new post in!1 1. When this post was originally published the slug command didn't account for making all of the text lower case. 
This was fixed in a subsequent [commit](https://github.com/ryancheley/ryancheley.com/commit/54f41680fdca4131735346764048d4e5fd206fd6) ↩︎ ",2022-02-02,automating-the-file-creation,"In my last post [Auto Generating the Commit Message](https://www.ryancheley.com/2022/01/28/auto-generating-the-commit-message/) I indicated that this post I would ""throw it all together and to get a spot where I can run one make command that will do all of this for me"". I decided to take a brief detour though as I … ",Automating the file creation,https://www.ryancheley.com/2022/02/02/automating-the-file-creation/ ryan,technology,"Several weeks ago in [Cronjob Redux](/cronjob-redux.html) I wrote that I had _finally_ gotten Cron to automate the entire process of compiling the `h264` files into an `mp4` and uploading it to [YouTube](https://www.youtube.com). I hadn’t. And it took the better part of the last 2 weeks to figure out what the heck was going on. Part of what I wrote before was correct. I wasn’t able to read the `client_secrets.json` file and that was leading to an error. I was _not_ correct on the creation of the `create_mp4.sh` though. The reason I got it to run automatically that night was because I had, in my testing, created the `create_mp4.sh` and when cron ran my `run_script.sh` it was able to use what was already there. The next night when it ran, the `create_mp4.sh` was already there, but the `h264` files that were referenced in it weren’t. This led to no video being uploaded and me being confused. The issue was that cron was unable to run the part of the script that generates the script to create the `mp4` file. I’m close to having a fix for that, but for now I did the most inelegant thing possible.
I broke up the script in cron so it looks like this: 00 06 * * * /home/pi/Documents/python_projects/cleanup.sh 10 19 * * * /home/pi/Documents/python_projects/create_script_01.sh 11 19 * * * /home/pi/Documents/python_projects/create_script_02.sh >> $HOME/Documents/python_projects/create_mp4.sh 2>&1 12 19 * * * /home/pi/Documents/python_projects/create_script_03.sh 13 19 * * * /home/pi/Documents/python_projects/run_script.sh At 6am every morning the `cleanup.sh` runs and removes the `h264` files, the `mp4` file and the `create_mp4.sh` script. At 7:10pm the ‘[header](https://gist.github.com/ryancheley/5b11cc15160f332811a3b3d04edf3780)’ for the `create_mp4.sh` runs. At 7:11pm the ‘[body](https://gist.github.com/ryancheley/9e502a9f1ed94e29c4d684fa9a8c035a)’ for `create_mp4.sh` runs. At 7:12pm the ‘[footer](https://gist.github.com/ryancheley/3c91a4b27094c365b121a9dc694c3486)’ for `create_mp4.sh` runs. Finally at 7:13pm the `run_script.sh` compiles the `h264` files into an `mp4` and uploads it to YouTube. Last night while I was at a School Board meeting the whole process ran on its own. I was super pumped when I checked my YouTube channel and saw that the May 1 hummingbird video was there and I didn’t have to do anything. ",2018-05-02,automating-the-hummingbird-video-upload-to-youtube-or-how-i-finally-got-cron-to-do-what-i-needed-it-to-do-but-in-the-ugliest-way-possible,"Several weeks ago in [Cronjob Redux](/cronjob-redux.html) I wrote that I had _finally_ gotten Cron to automate the entire process of compiling the `h264` files into an `mp4` and uploading it to [YouTube](https://www.youtube.com). I hadn’t.
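Once the underlying cron issue is fixed, the four evening entries could collapse back into one. A hypothetical sketch (the `run_nightly` name and function form are mine, not from the original crontab), where each step only runs if the previous one succeeded:

```shell
# Run the header, body, and footer generators in order, then the
# compile-and-upload step; a failure anywhere stops the chain.
run_nightly() {
  local base=$1
  $base/create_script_01.sh &&
  $base/create_script_02.sh >> $base/create_mp4.sh 2>&1 &&
  $base/create_script_03.sh &&
  $base/run_script.sh
}
```

A wrapper script calling `run_nightly /home/pi/Documents/python_projects` would then need only a single 7:10pm crontab line alongside the morning cleanup.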
And it took the better part of the last 2 weeks to figure out what … ",Automating the Hummingbird Video Upload to YouTube or How I finally got Cron to do what I needed it to do but in the ugliest way possible,https://www.ryancheley.com/2018/05/02/automating-the-hummingbird-video-upload-to-youtube-or-how-i-finally-got-cron-to-do-what-i-needed-it-to-do-but-in-the-ugliest-way-possible/ ryan,microblog,"After a week long hiatus from swimming I got back to it today. I only swam 1550 yards but it was a good swim. I kind of felt the need to take it a bit easy today given the week long break, and I needed to be at the office a bit early to get ready to help onboard a new employee. While it wasn't a great distance, or a great time (2'45"" 100 yd pace) it still felt really good to be back in the pool. I am again back to feeling 'pretty sleepy' early in the evening which I'm hoping will rid me of the [insomnia](2025/02/22/insomnia/) from last week. One of the best / weirdest parts of the swim is the honking from the geese. About 25 minutes into my swim they seem to wake up and just start honking at each other ... or maybe at me ... or maybe at the people walking around. Not really sure. It is slightly off putting. They are **very** loud, but it also makes me giggle ... so that's something. ",2025-02-24,back-in-the-pool,"After a week long hiatus from swimming I got back to it today. I only swam 1550 yards but it was a good swim. I kind of felt the need to take it a bit easy today given the week long break, and I needed to be at the office … ",Back in the pool,https://www.ryancheley.com/2025/02/24/back-in-the-pool/ ryan,musings,"Last weekend I watched both games 7 of the NBA conference finals. I have no particular affinity for the NBA (I prefer the [Madness in March associated with the NCAA](https://en.m.wikipedia.org/wiki/NCAA_Division_I_Men%27s_Basketball_Tournament)) but I figured with 2 game 7s it might be interesting to watch. I was not wrong. 
On Sunday night Cleveland was hosted by Boston in a rematch of a game 7 from 2010. One of only 2 game 7s that LeBron James had lost. This game had all the makings of what you would want a game 7 to be. A young upstart rookie (Tatum) with something to prove. A veteran (James), also with something to prove. What really stuck out for me, for this game, was what happened at the 6:45 mark in the fourth quarter. Tatum dunked on LeBron (posterized is the term [ESPN](http://www.espn.com/video/clip?id=23627416) used) to put the score at 71-69 Cleveland. What happened next though, I think, is why the Cavs won the game. Tatum proceeded to bump his chest up against the back of LeBron’s shoulder, like a small child might run up to a big kid when he did something amazing to be like, “Look at me ... I’m a big kid too!” LeBron just stood there and looked at Tatum with incredulity. The announcers seemed to enjoy the spectacle more than they should have. But LeBron just stood there, the Boston crowd cheering wildly at what their young rookie had just done. To dunk over LeBron, arguably one of the greatest, in a game 7? This is the thing that legends are made of. But while the crowd and the announcers saw James look like he was a mere mortal ... what I saw was the game turning around. The look on James’ face wasn’t one of ‘damn ... that kid just dunked on me.’ It was, “Damn ... now I’m going to get mine and I have a punk to show how this game is really played.” From that point on the Cavs outscored the Celtics 16-10 ... not a huge margin, but a margin enough to win. What the score doesn’t show is the look of determination on LeBron’s face as he carried his team to the NBA Finals. Not because he scored all 16 points (he _only_ scored 7) but because he checked his ego at the door and worked to make his team better than the other team. In short, he was the better teammate than Tatum in those last minutes and that’s why the Cavs are in the Finals and the Celtics aren’t.
Tatum’s reaction to dunking on LeBron is understandable. Hell, if I had done something like that when I was his age, I would have pumped my chest up too. But it is the patience and reservedness (that perhaps come with age) that make you a great player or team member. You don’t really want to rile up a great player because that’s the only reason they need to whoop your butt. Perhaps Tatum will learn this lesson. Perhaps he won’t. Because you see, acting like a little kid isn’t just the right of a rookie. James Harden pulled some immature shenanigans too in his team’s loss to the Warriors. At one point, with the Rockets up 59-53 with 6:13 in the 3rd, Harden went for a layup and was knocked down ... accidentally in my opinion. When a player from the Warriors tried to help him up he just sat there and then flailed his arms until one of his teammates came to help him up. Big man there Harden. By the end of the 3rd quarter the Rockets were down 76-69. By the end of the game they’d lost 101-92. You see, when it comes down to it a great teammate will do what’s best for the team, and not do what’s best for their ego. It doesn’t seem to matter, old or young, rookie or veteran, not having the ability to control your emotions at key points in a game (or in life) can be more costly than you realize. Sometimes it’s game 7 of the NBA Conference finals, sometimes it’s just a pick up game with some friends at the park, but in either case, being a good teammate requires checking your ego at the door and working to be the best teammate you can be, not being the best player on the court. To put it another way, being the smartest person in the room doesn’t make you the most influential person in the room, and when it comes down to moving ahead, being influential trumps being smart.
I have no particular affinity for the NBA (I prefer the [Madness in March associated with the NCAA](https://en.m.wikipedia.org/wiki/NCAA_Division_I_Men%27s_Basketball_Tournament)) but I figured with 2 game 7s it might be interesting to watch. I was not wrong. On Sunday night … ",Basketball Conference Finals OR How the actions of one person can fire up the other team and lead them to win,https://www.ryancheley.com/2018/06/08/basketball-conference-finals-or-how-the-actions-of-one-person-can-fire-up-the-other-team-and-lead-them-to-win/ ryan,musings,"[Healthcare Big Data Success Starts with the Right Questions](http://healthitanalytics.com/news/healthcare-big-data-success- starts-with-the-right-questions) > > The last major piece of the puzzle is the ability to pick projects that > can bear fruit quickly, Ibrahim added, in order to jumpstart enthusiasm and > secure widespread support. * * * [Healthcare Big Data Success Starts with the Right Questions](http://healthitanalytics.com/news/healthcare-big-data-success- starts-with-the-right-questions) > > Moving from measurement to management – and from management to improvement > – was the next challenge, he added. * * * [Healthcare Big Data Success Starts with the Right Questions](http://healthitanalytics.com/news/healthcare-big-data-success- starts-with-the-right-questions) > > Each question builds upon the previous answer to create a comprehensive > portrait of how data flows throughout a segment of the organization. Ibrahim > paraphrased the survey like so: • Do we have the data and analytics to connect to the important organizations in each of these three domains? • If we have the data, is it integrated in a meaningful way? Can we look at that data and tell meaningful stories about what is happening, where it’s happening, and why it’s happening? 
• Even if we have the data and it’s integrated meaningfully and we can start to tell that story, do we apply some statistical methodology to the data where we aggregate and report on it? • If we have the data, and it can tell us a story, and we use good analytics methodology, are we able to present it in an understandable way to all our stakeholders, from the front-line clinician all the way up to the chief executive? • Are the analytics really meaningful? Does the information help to make decisions? Is it rich enough that we can really figure out why something is happening? • Lastly, even if we have accomplished all these other goals, can we deliver the information in a timely fashion to the people who need this data to do their jobs? ",2017-01-07,big-data-and-healthcare-thoughts,"[Healthcare Big Data Success Starts with the Right Questions](http://healthitanalytics.com/news/healthcare-big-data-success-starts-with-the-right-questions) > > The last major piece of the puzzle is the ability to pick projects that > can bear fruit quickly, Ibrahim added, in order to jumpstart enthusiasm and > secure widespread support. * * * [Healthcare Big Data Success Starts with the Right Questions](http://healthitanalytics.com/news/healthcare-big-data-success-starts-with-the-right-questions) > > Moving from measurement … ",Big Data and Healthcare - thoughts,https://www.ryancheley.com/2017/01/07/big-data-and-healthcare-thoughts/ Ryan Cheley,pages,"# Speaking / Podcasts 1. Speaker at PyCascades 2025: [Error Culture](https://youtu.be/FBMg2Bp4I-Q) 2. Speaker at DjangoCon US 2024: [Error Culture](https://2024.djangocon.us/talks/error-culture/) 3. Speaker at DjangoCon US 2023: [Contributing to Django or how I learned to stop worrying and just try to fix an ORM Bug](https://youtu.be/VPldDxuJDsg?si=r2ob3j4zIeYZY7tO) 4.
Guest on [Test & Code episode 183](https://testandcode.com/183) where I spoke about the ""challenges of managing software teams, and how to handle them"" and other skills # OSS Work 1. Contributed to the following open source projects: * [DjangoProject.com](https://www.djangoproject.com) with [PR](https://github.com/django/django/pull/12128) which I wrote about [here](https://www.ryancheley.com/2019/12/07/my-first-commit-to-an-open-source-project-django/) * [Django](https://github.com/django/django/) with [PR](https://github.com/django/django/pull/16243) which I wrote about [here](https://www.ryancheley.com/2022/11/12/contributing-to-django/) * [DjangoPackages.org](https://djangopackages.org) * Limited TextField size to help eliminate potential for Spam, closing a 10 year old issue with [PR](https://github.com/djangopackages/djangopackages/commit/5463558eb5f6a10978158946c7867725b57d14dd) * Added support for emoji with [PR](https://github.com/djangopackages/djangopackages/commit/051c5ca14d25cb39d7d56ea63e4cfb317d78c13c) * Added Support for [Emojificate](https://pypi.org/project/emojificate/) with [PR](https://github.com/djangopackages/djangopackages/pull/849) to make emoji accessible ""with fallback images, alt text, title text and aria labels to represent emoji in HTML"" * [Tryceratops](https://pypi.org/project/tryceratops/) with [PR](https://github.com/guilatrova/tryceratops/commits?author=ryancheley) which I wrote about [here](https://www.ryancheley.com/2021/08/07/contributing-to-tryceratops/) * [Wagtail-Resume](https://pypi.org/project/wagtail-resume/) with [PR](https://github.com/adinhodovic/wagtail-resume/pull/32) * [Diagrams](https://pypi.org/project/diagrams/) with [PR](https://github.com/mingrammer/diagrams/pull/426) * [MLB-StatsAPI](https://pypi.org/project/MLB-StatsAPI/) with [PR](https://github.com/toddrob99/MLB-StatsAPI/pull/41) * [django-sql-dashboard](https://pypi.org/project/django-sql-dashboard/) with 
[PR](https://github.com/simonw/django-sql-dashboard/pull/138) which I wrote about [here](https://www.ryancheley.com/2021/07/09/contributing-to-django-sql-dashboard/) * [dnspython](https://pypi.org/project/dnspython/) with [PR](https://github.com/rthalley/dnspython/issues/775) * [markdown-to-sqlite](https://pypi.org/project/markdown-to-sqlite/) with [PR](https://github.com/simonw/markdown-to-sqlite/pull/3) 2. Author and Maintainer of the Open Source Projects: * [toggl-to-sqlite](https://pypi.org/project/toggl-to-sqlite/) * [the-well-maintained-test](https://pypi.org/project/the-well-maintained-test/) which I wrote about [here](https://cur.at/4n0KtYP?m=web) * The package was mentioned in [Django News Issue #104](https://django-news.com/issues/104) * The package is featured in the [Rich Gallery](https://www.textualize.io/rich/gallery/4) * [pelican-to-sqlite](https://pypi.org/project/pelican-to-sqlite/) which I wrote about [here](https://www.ryancheley.com/2022/01/16/adding-search-to-my-pelican-blog-with-datasette/) 3. One of the Maintainers of [Django Packages](https://djangopackages.org) with [Jeff Triplett](https://github.com/jefftriplett) and [Maksudul Haque](https://fosstodon.org/@saadmk11) 4. Member of the [Python Software Foundation](https://www.python.org/users/rcheley/) 5. Member of the [Django Software Foundation](https://www.djangoproject.com/foundation/minutes/2021/nov/11/dsf-board-monthly-meeting/) 6. Navigator for [Djangonaut.space](https://djangonaut.space) * Session 1 (Jan 15, 2024 - Mar 11, 2024) * Session 2 (Jun 17, 2024 - Aug 12, 2024) * Session 4 (Feb 17, 2025 - Apr 13, 2025) 7. [Django Commons](https://github.com/django-commons/) admin # Certifications 1. [Google Cloud Platform Cloud Architect](https://www.credential.net/f8e9ee03-67cb-48e3-8d3e-d824afc6265b?key=38397759fd07a2225d694c34d34f994bcdde3b9922962d865e4e9c6df478f139) 2. Certified EDI Academy Professional # Guest Writing 1. 
Have been published on the [PyBites Blog](https://pybit.es/author/ryancheley/) # Other 1. Ran 13 half marathons in 13 months * SkyBorne, December 2013 * Carlsbad, January 2014 * Palm Springs, February 2014 * Zion National Park, March 2014 * La Jolla, April 2014 * Menifee, May 2014 * San Diego Rock 'n Roll, June 2014 * Fourth of July Virtual, July 2014 * America's Finest City, August 2014 * Ventura, September 2014 * San Luis Obispo, October 2014 * Santa Barbara, November 2014 * SkyBorne, December 2014 2. Member of [Bermuda Dunes Community Council](https://rivco4.org/Councils/Community-Councils), September 2009 - June 2013 3. Created a [Django site to track Stadiums](https://stadiatracker.com/Pages/home) that I've visited ",2025-04-02,brag-doc,"# Speaking / Podcasts 1. Speaker at PyCascades 2025: [Error Culture](https://youtu.be/FBMg2Bp4I-Q) 2. Speaker at DjangoCon US 2024: [Error Culture](https://2024.djangocon.us/talks/error-culture/) 3. Speaker at DjangoCon US 2023: [Contributing to Django or how I learned to stop worrying and just try to fix an ORM Bug](https://youtu.be/VPldDxuJDsg?si=r2ob3j4zIeYZY7tO) 4. Guest on [Test & Code episode 183](https://testandcode.com/183) where I spoke about the ""challenges … ",Brag Doc,https://www.ryancheley.com/pages/brag-doc/ ryan,microblog,"One of the great things about living in the desert of Southern California is that during the winter time the daytime temps are typically in the high 60s or low 70s. This makes outdoor activities amazing experiences. What's even better is that every January / February the California Winter League gears up and my wife and I will spend Saturday mornings (and sometimes afternoons) watching baseball under the gloriously beautiful sky. The best part is that the teams are filled with high school and college hopefuls, so it's baseball in kind of its rawest form: better than Little League, but not quite as good as pro ball.
And since it's a winter league with essentially made up teams, my wife and I will pick a team to root for and then spend the ensuing 7 innings trash talking each other as 'our' team is winning. Another great part is that it's a relatively inexpensive outing. Each Saturday two games are played, and for $10 for each adult you get access to both games. The games are only 7 innings long but they use wooden bats instead of aluminum bats so it feels more like pro ball than college or high school ball. And just because it's an instructional league doesn't mean there aren't some great plays made. Just today I saw a hit stealing diving catch made by a shortstop, and a diving catch into foul territory made by a right fielder that ran faster than I really thought was possible. ",2025-02-01,california-winter-league,"One of the great things about living in the desert of Southern california is that during the winter time the day time temps are typically in the high 60s or low 70s. This makes outdoor activities amazing experiences. What's even better is that every January / February the California Winter League … ",California Winter League,https://www.ryancheley.com/2025/02/01/california-winter-league/ Ryan Cheley,pages," * Wallet * iPhone 14 * Apple Watch Series 8 45mm * iPad Pro 12.9 2021 * [Tom Binh Synik 30](https://www.tombihn.com/products/synik-30?variant=42599481901245) ",2025-04-02,carry," * Wallet * iPhone 14 * Apple Watch Series 8 45mm * iPad Pro 12.9 2021 * [Tom Binh Synik 30](https://www.tombihn.com/products/synik-30?variant=42599481901245) ",Carry,https://www.ryancheley.com/pages/carry/ ryan,technology,"From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/ArchiveIndexView/) `ArchiveIndexView` > > Top-level archive of date-based items. ## Attributes There are 20 attributes that can be set for the `ArchiveIndexView` but most of them are based on ancestral Classes of the CBV so we won’t be going into them in Detail. 
### DateMixin Attributes

* allow_future: Defaults to `False`. If set to `True` you can show items that have dates in the future, where the future is anything after the current date/time on the server.
* date_field: The field that the view will use to filter the date on. If this is not set an `ImproperlyConfigured` error will be raised.
* uses_datetime_field: Convert a date into a datetime when the date field is a `DateTimeField`. When time zone support is enabled, `date` is assumed to be in the current time zone, so that displayed items are consistent with the URL.

### BaseDateListView Attributes

* allow_empty: Defaults to `False`. This means that if there is no data a `404` error will be returned with the message

> `No <verbose_name_plural> available`

where `<verbose_name_plural>` is the plural display name from your model’s Meta options.

* date_list_period: This attribute allows you to break down by a specific period of time (`'year'`, `'month'`, `'week'`, `'day'`) and group your date-driven items by the period specified. See below for implementation.

For `year`:

views.py

    date_list_period = 'year'

urls.py

Nothing special needs to be done

\.html

    {% block content %}
    {% for date in date_list %}
      {{ date.year }}
      {% for p in person %}
        {% if date.year == p.post_date.year %}
          {{ p.post_date }}: {{ p.first_name }} {{ p.last_name }}
        {% endif %}
      {% endfor %}
    {% endfor %}
    {% endblock %}

Will render:

![Rendered Archive Index View](/images/uploads/2019/11/634B59DC-6BA6-4C5F-B969-E8B924123FFA.jpeg)

For `month`:

views.py

    date_list_period = 'month'

urls.py

Nothing special needs to be done

\.html

    {% block content %}
    {% for date in date_list %}
      {{ date.month }}
      {% for p in person %}
        {% if date.month == p.post_date.month %}
          {{ p.post_date }}: {{ p.first_name }} {{ p.last_name }}
        {% endif %}
      {% endfor %}
    {% endfor %}
    {% endblock %}

Will render:

![BaseArchiveIndexView](/images/uploads/2019/11/04B40CD4-3B85-440D-810D-4050727D6120.jpeg)

### BaseArchiveIndexView Attributes

* context_object_name: Name the object used in the template. As stated before, you’re going to want to do this so you don’t hate yourself (or have other developers hate you).

## Other Attributes

### MultipleObjectMixin Attributes

These attributes were all reviewed in the [ListView](/cbv-listview.html) post

* model = None
* ordering = None
* page_kwarg = 'page'
* paginate_by = None
* paginate_orphans = 0
* paginator_class = \
* queryset = None

### TemplateResponseMixin Attributes

This attribute was reviewed in the [ListView](/cbv-listview.html) post

* content_type = None

### ContextMixin Attributes

This attribute was reviewed in the [ListView](/cbv-listview.html) post

* extra_context = None

### View Attributes

This attribute was reviewed in the [View](/cbv-view.html) post

* http_method_names = ['get', 'post', 'put', 'patch', 'delete', 'head', 'options', 'trace']

### TemplateResponseMixin Attributes

These attributes were all reviewed in the [ListView](/cbv-listview.html) post

* response_class = \
* template_engine = None
* template_name = None

## Diagram

A visual representation of how `ArchiveIndexView` is derived can be seen here:
![ArchiveIndexView](https://yuml.me/diagram/plain;/class/%5BMultipleObjectTemplateResponseMixin%7Bbg:white%7D%5D%5E-%5BArchiveIndexView%7Bbg:green%7D%5D,%20%5BTemplateResponseMixin%7Bbg:white%7D%5D%5E-%5BMultipleObjectTemplateResponseMixin%7Bbg:white%7D%5D,%20%5BBaseArchiveIndexView%7Bbg:white%7D%5D%5E-%5BArchiveIndexView%7Bbg:green%7D%5D,%20%5BBaseDateListView%7Bbg:white%7D%5D%5E-%5BBaseArchiveIndexView%7Bbg:white%7D%5D,%20%5BMultipleObjectMixin%7Bbg:white%7D%5D%5E-%5BBaseDateListView%7Bbg:white%7D%5D,%20%5BContextMixin%7Bbg:white%7D%5D%5E-%5BMultipleObjectMixin%7Bbg:white%7D%5D,%20%5BDateMixin%7Bbg:white%7D%5D%5E-%5BBaseDateListView%7Bbg:white%7D%5D,%20%5BView%7Bbg:lightblue%7D%5D%5E-%5BBaseDateListView%7Bbg:white%7D%5D.svg) ## Conclusion For date-driven data (articles, blogs, etc.), the `ArchiveIndexView` is a great CBV and is super easy to implement. ",2019-11-24,cbv-archiveindexview,"From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/ArchiveIndexView/) `ArchiveIndexView` > > Top-level archive of date-based items. ## Attributes There are 20 attributes that can be set for the `ArchiveIndexView` but most of them are based on ancestral classes of the CBV so we won’t be going into them in detail. ### DateMixin Attributes * allow_future: Defaults to … ",CBV - ArchiveIndexView,https://www.ryancheley.com/2019/11/24/cbv-archiveindexview/ ryan,technology,"From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/BaseListView/) `BaseListView` > > A base view for displaying a list of objects. And from the [Django Docs](https://docs.djangoproject.com/en/2.2/ref/class-based-views/generic-display/#listview): > > A base view for displaying a list of objects. It is not intended to be > used directly, but rather as a parent class of the > django.views.generic.list.ListView or other views representing lists of > objects. 
Almost all of the functionality of `BaseListView` comes from the `MultipleObjectMixin`. Since the Django Docs specifically say don’t use this directly, I won’t go into it too much. ## Diagram A visual representation of how `BaseListView` is derived can be seen here: ![BaseListView](https://yuml.me/diagram/plain;/class/%5BMultipleObjectMixin%7Bbg:white%7D%5D%5E-%5BBaseListView%7Bbg:green%7D%5D,%20%5BContextMixin%7Bbg:white%7D%5D%5E-%5BMultipleObjectMixin%7Bbg:white%7D%5D,%20%5BView%7Bbg:lightblue%7D%5D%5E-%5BBaseListView%7Bbg:green%7D%5D.svg) ## Conclusion Don’t use this. It should be subclassed into a usable view (a la `ListView`). There are many **Base** views that are ancestors for other views. I’m not going to cover any more of them going forward **UNLESS** the documentation says there’s a specific reason to. ",2019-11-17,cbv-baselistview,"From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/BaseListView/) `BaseListView` > > A base view for displaying a list of objects. And from the [Django Docs](https://docs.djangoproject.com/en/2.2/ref/class-based-views/generic-display/#listview): > > A base view for displaying a list of objects. It is not intended to be > used directly, but rather as a parent class of the > django.views.generic.list.ListView … ",CBV - BaseListView,https://www.ryancheley.com/2019/11/17/cbv-baselistview/ ryan,technology,"From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/CreateView/) `CreateView` > > View for creating a new object, with a response rendered by a template. ## Attributes Three attributes are required to get the template to render. Two we’ve seen before (`queryset` and `template_name`). The new one we haven’t seen before is the `fields` attribute. * fields: specifies what fields from the model or queryset will be displayed on the rendered template. 
You can set `fields` to `'__all__'` if you want to return all of the fields.

## Example

views.py

    queryset = Person.objects.all()
    fields = '__all__'
    template_name = 'rango/person_form.html'

urls.py

    path('create_view/', views.myCreateView.as_view(), name='create_view'),

\