121 rows where category = "technology"
slug ▼ | author | category | content | published_date | summary | title | url |
---|---|---|---|---|---|---|---|
adding-my-raspberry-pi-project-code-to-github | ryan | technology | Over the long holiday weekend I had the opportunity to play around a bit with some of my Raspberry Pi scripts and try to do some fine tuning. I mostly failed in getting anything to run better, but I did discover that not having my code in version control was a bad idea. (Duh) I spent the better part of an hour trying to find a script that I had accidentally deleted somewhere in my blog. Turns out it was (mostly) there, but it didn’t ‘feel’ right … though I’m not sure why. I was able to restore the file from my blog archive, but I decided that was a dumb way to live, given that 1. I use version control at work (and have for the last 15 years) and 2. I’ve used it for other personal projects. However, I’ve only ever used a GUI version of either Subversion (at work) or GitHub (for personal projects via PyCharm). I’ve never used it from the command line. And so, with a bit of time on my hands, I dove in to see what needed to be done. Turns out, not much. I used this [GitHub](https://help.github.com/articles/adding-an-existing-project-to-github-using-the-command-line/) resource to get me what I needed. Only a couple of commands and I was in business. The problem is that I have a terrible memory and this isn’t something I’m going to do very often. So, I decided to write a bash script to encapsulate all of the commands and help me out a bit. The one-time setup looks like this: echo "Enter your commit message:" read commit_msg git commit -m "$commit_msg" git remote add origin path/to/repository git remote -v git push -u origin master And the day-to-day script looks like this: git add $1 echo "Enter your commit message:" read commit_msg git commit -m "$commit_msg" git push I just recently learned about user input in bash scripts and was really excited about the opportunity to be able to use it. Turns out it didn’t take long to try it out! (God I love learning things!) 
What the script does is commit the files that have been changed (all of them), add … | 2018-11-25 | Over the long holiday weekend I had the opportunity to play around a bit with some of my Raspberry Pi scripts and try to do some fine tuning. I mostly failed in getting anything to run better, but I did discover that not having my code in version control was … | Adding my Raspberry Pi Project code to GitHub | https://www.ryancheley.com/2018/11/25/adding-my-raspberry-pi-project-code-to-github/ |
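The day-to-day half of the script maps to a fixed sequence of git commands. A minimal sketch of that sequence, assuming a hypothetical helper that only assembles the argv lists (so the commands can be inspected or handed to `subprocess.run` without touching a real repository):

```python
def git_commands(files, commit_msg):
    """Return the git argv lists the script runs, in order:
    stage the given files, commit with the supplied message, push."""
    return [
        ["git", "add", *files],
        ["git", "commit", "-m", commit_msg],
        ["git", "push"],
    ]

# Show the commands for a single (hypothetical) file and message.
for cmd in git_commands(["camera.py"], "fine tuning"):
    print(" ".join(cmd))
```

Each inner list could then be executed with `subprocess.run(cmd, check=True)` to reproduce the bash script's behaviour.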
adding-search-to-my-pelican-blog-with-datasette | ryan | technology | Last summer I migrated my blog from [Wordpress](https://wordpress.com) to [Pelican](https://getpelican.com). I did this for a couple of reasons (see my post [here](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-wordpress/)), but one thing that I was a bit worried about when I migrated was that Pelican's offering for site search didn't look promising. There was an outdated plugin called [tipue-search](https://github.com/pelican-plugins/tipue-search) but when I was looking at it I could tell it was on its last legs. I thought about it, and since my blog isn't super highly trafficked AND you can use Google to search a specific site, I could wait a bit and see what options came up. After waiting a few months, I decided it would be interesting to see if I could write a SQLite utility to get the data from my blog, add it to a SQLite database and then use [datasette](https://datasette.io) to serve it up. I wrote the beginning scaffolding for it last August in a utility called [pelican-to-sqlite](https://pypi.org/project/pelican-to-sqlite/0.1/), but I ran into several technical issues I just couldn't overcome. I thought about giving up, but sometimes you just need to take a step away from a thing, right? After the first of the year I decided to revisit my idea, but first looked to see if there was anything new for Pelican search. I found a plugin called [search](https://github.com/pelican-plugins/search) that was released last November and is actively being developed, but as I read through the documentation there was just **A LOT** of stuff: * stork * requirements for the structure of your page html * static asset hosting * deployment requires updating your `nginx` settings These all looked a bit scary to me, and since I've done some work using [datasette](https://datasette.io) I thought I'd revisit my initial idea. 
## My First Attempt As I mentioned above, I wrote the beginning scaffolding late last summer. In my first attempt I tried to use a few tools to read the `md` files and … | 2022-01-16 | Last summer I migrated my blog from [Wordpress](https://wordpress.com) to [Pelican](https://getpelican.com). I did this for a couple of reasons (see my post [here](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-wordpress/)), but one thing that I was a bit worried about when I migrated was that Pelican's offering for site search didn't look promising. There was an outdated plugin … | Adding Search to My Pelican Blog with Datasette | https://www.ryancheley.com/2022/01/16/adding-search-to-my-pelican-blog-with-datasette/ |
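The post's core idea — load post text into SQLite and let Datasette serve searches over it — can be sketched with the standard-library `sqlite3` module and an FTS5 virtual table. This is a hedged illustration, not the actual `pelican-to-sqlite` implementation; the table and function names are assumptions:

```python
import sqlite3

def index_posts(conn, posts):
    """Create a full-text index over posts and load (slug, title, body) rows."""
    conn.execute(
        "CREATE VIRTUAL TABLE IF NOT EXISTS posts USING fts5(slug, title, body)"
    )
    conn.executemany("INSERT INTO posts VALUES (?, ?, ?)", posts)
    conn.commit()

def search(conn, term):
    """Return slugs of posts whose indexed text matches the FTS query."""
    rows = conn.execute(
        "SELECT slug FROM posts WHERE posts MATCH ? ORDER BY rank", (term,)
    )
    return [slug for (slug,) in rows]

conn = sqlite3.connect(":memory:")
index_posts(conn, [
    ("hello-datasette", "Hello Datasette", "serving sqlite data with datasette"),
    ("pelican-migration", "Pelican Migration", "moving a blog off wordpress"),
])
print(search(conn, "datasette"))  # → ['hello-datasette']
```

Pointing Datasette at the resulting database file would then expose this search through its web UI and JSON API.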
an-update-to-my-first-python-script | ryan | technology | Nothing can ever really be considered **done** when you're talking about programming, right? I decided to try and add images to the [python script I wrote last week](https://github.com/miloardot/python-files/commit/e603eb863dbba169938b63df3fa82263df942984) and was able to do it, with not too much hassle. The first thing I decided to do was to update the code on `pythonista` on my iPad Pro and verify that it would run. It took some doing (mostly because I _forgot_ that the attributes in an `img` tag included what I needed ... initially I was trying to programmatically get the name of the person from the image file itself using [regular expressions](https://en.wikipedia.org/wiki/Regular_expression) ... it didn't work out well). Once that was done I branched the `master` on GitHub into a `development` branch and copied the changes there. Then I performed a **pull request** on the macOS GitHub Desktop Application. Finally, I used the macOS GitHub app to merge my **pull request** from `development` into `master` and now have the changes. The updated script will now also get the image data to display into the multi markdown table: | Name | Title | Image | | --- | --- | --- | |Mike Cheley|CEO/Creative Director|| |Ozzy|Official Greeter|| |Jay Sant|Vice President|| |Shawn Isaac|Vice President|| |Jason Gurzi|SEM Specialist|| |Yvonne Valles|Director of First Impressions|| |Ed Lowell|Senior Designer|| … | 2016-10-22 | Nothing can ever really be considered **done** when you're talking about programming, right? I decided to try and add images to the [python script I wrote last week](https://github.com/miloardot/python-files/commit/e603eb863dbba169938b63df3fa82263df942984) and was able to do it, with not too much hassle. The first thing I decided to do was to update the … | An Update to my first Python Script | https://www.ryancheley.com/2016/10/22/an-update-to-my-first-python-script/ |
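The table-building step the post describes can be sketched as a small function that emits a multimarkdown table with an inline image per row. This is an illustrative reconstruction, not the post's actual script; the function name and image URLs are hypothetical:

```python
def staff_table(people):
    """Build a multimarkdown table of name, title, and an inline image tag.

    `people` is an iterable of (name, title, image_url) tuples.
    """
    lines = ["| Name | Title | Image |", "| --- | --- | --- |"]
    for name, title, image_url in people:
        # Markdown inline image syntax: ![alt text](url)
        lines.append(f"|{name}|{title}|![{name}]({image_url})|")
    return "\n".join(lines)

print(staff_table([
    ("Mike Cheley", "CEO/Creative Director", "https://example.com/mike.jpg"),
    ("Ozzy", "Official Greeter", "https://example.com/ozzy.jpg"),
]))
```

In the post's version, the name/title/image triples would come from scraping the `img` tag attributes rather than a hard-coded list.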
automating-the-deployment | ryan | technology | We got everything set up, and now we want to automate the deployment. Why would we want to do this you ask? Let’s say that you’ve decided that you need to set up a test version of your site (what some might call UAT) on a new server (at some point I’ll write something up about multiple Django Sites on the same server and part of this will still apply then). How can you do it? Well you’ll want to write yourself some scripts! I have a mix of Python and Shell scripts set up to do this. They are a bit piecemeal, but they also allow me to run specific parts of the process without having to try and execute a script with ‘commented’ out pieces. **Python Scripts** create_server.py destroy_droplet.py **Shell Scripts** copy_for_deploy.sh create_db.sh create_server.sh deploy.sh deploy_env_variables.sh install-code.sh setup-server.sh setup_nginx.sh setup_ssl.sh super.sh upload-code.sh The Python script `create_server.py` looks like this: # create_server.py import requests import os from collections import namedtuple from operator import attrgetter from time import sleep Server = namedtuple('Server', 'created ip_address name') doat = os.environ['DIGITAL_OCEAN_ACCESS_TOKEN'] # Create Droplet headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {doat}', } data = <data_keys> print('>>> Creating Server') requests.post('https://api.digitalocean.com/v2/droplets', headers=headers, data=data) print('>>> Server Created') print('>>> Waiting for Server Stand up') sleep(90) print('>>> Getting Droplet Data') params = ( ('page', '1'), ('per_page', '10'), ) get_droplets = requests.get('https://api.digitalocean.com/v2/droplets', headers=headers, params=params) server_list = [] for d in get_droplets.json()['droplets']: server_list.append(Server(d['created_at'], d['networks']['v4'][0]['ip_address'], d['name'… | 2021-02-21 | We got everything set up, and now we want to automate the deployment. 
Why would we want to do this you ask? Let’s say that you’ve decided that you need to set up a test version of your site (what some might call UAT) on a new server … | Automating the deployment | https://www.ryancheley.com/2021/02/21/automating-the-deployment/ |
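The snippet above is truncated just as it builds `server_list`, but it imports `attrgetter`, which suggests the droplet records are sorted by creation time afterwards. A hedged sketch of what that next step might look like (the droplet data here is invented for illustration):

```python
from collections import namedtuple
from operator import attrgetter

# Same record shape as in the post's create_server.py
Server = namedtuple('Server', 'created ip_address name')

# Hypothetical droplet data in the shape the API loop collects.
server_list = [
    Server('2021-02-20T18:00:00Z', '203.0.113.10', 'older-droplet'),
    Server('2021-02-21T09:30:00Z', '203.0.113.11', 'newest-droplet'),
]

# ISO-8601 timestamps sort chronologically as strings, so sorting by
# `created` puts the most recently created droplet last.
server_list.sort(key=attrgetter('created'))
newest = server_list[-1]
print(newest.name, newest.ip_address)  # → newest-droplet 203.0.113.11
```

Grabbing the newest droplet's IP this way would let the later shell scripts know which server to deploy to.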
automating-the-hummingbird-video-upload-to-youtube-or-how-i-finally-got-cron-to-do-what-i-needed-it-to-do-but-in-the-ugliest-way-possible | ryan | technology | Several weeks ago in [Cronjob Redux](/cronjob-redux.html) I wrote that I had _finally_ gotten Cron to automate the entire process of compiling the `h264` files into an `mp4` and uploading it to [YouTube](https://www.youtube.com). I hadn’t. And it took the better part of the last 2 weeks to figure out what the heck was going on. Part of what I wrote before was correct. I wasn’t able to read the `client_secrets.json` file and that was leading to an error. I was _not_ correct on the creation of the `create_mp4.sh` though. The reason I got it to run automatically that night was because I had, in my testing, created the `create_mp4.sh` and when cron ran my `run_script.sh` it was able to use what was already there. The next night when it ran, the `create_mp4.sh` was already there, but the `h264` files that were referenced in it weren’t. This led to no video being uploaded and me being confused. The issue was that cron was unable to run the part of the script that generates the script to create the `mp4` file. I’m close to having a fix for that, but for now I did the most inelegant thing possible. I broke up the script in cron so it looks like this: 00 06 * * * /home/pi/Documents/python_projects/cleanup.sh 10 19 * * * /home/pi/Documents/python_projects/create_script_01.sh 11 19 * * * /home/pi/Documents/python_projects/create_script_02.sh >> $HOME/Documents/python_projects/create_mp4.sh 2>&1 12 19 * * * /home/pi/Documents/python_projects/create_script_03.sh 13 19 * * * /home/pi/Documents/python_projects/run_script.sh At 6am every morning the `cleanup.sh` runs and removes the `h264` files, the `mp4` file and the `create_mp4.sh` script. At 7:10pm the ‘[header](https://gist.github.com/ryancheley/5b11cc15160f332811a3b3d04edf3780)’ for the `create_mp4.sh` runs. 
At 7:11pm the ‘[body](https://gist.github.com/ryancheley/9e502a9f1ed94e29c4d684fa9a8c035a)’ for `create_mp4.sh` runs. At 7:12pm the ‘[footer](https://gist.github.com/ryancheley/3c91a4b27094c365b121a9dc694c3486)’ for `create_… | 2018-05-02 | Several weeks ago in [Cronjob Redux](/cronjob-redux.html) I wrote that I had _finally_ gotten Cron to automate the entire process of compiling the `h264` files into an `mp4` and uploading it to [YouTube](https://www.youtube.com). I hadn’t. And it took the better part of the last 2 weeks to figure out what … | Automating the Hummingbird Video Upload to YouTube or How I finally got Cron to do what I needed it to do but in the ugliest way possible | https://www.ryancheley.com/2018/05/02/automating-the-hummingbird-video-upload-to-youtube-or-how-i-finally-got-cron-to-do-what-i-needed-it-to-do-but-in-the-ugliest-way-possible/ |
cbv-archiveindexview | ryan | technology | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/ArchiveIndexView/) `ArchiveIndexView` > > Top-level archive of date-based items. ## Attributes There are 20 attributes that can be set for the `ArchiveIndexView` but most of them are based on ancestral Classes of the CBV so we won’t be going into them in detail. ### DateMixin Attributes * allow_future: Defaults to False. If set to True you can show items that have dates that are in the future where the future is anything after the current date/time on the server. * date_field: the field that the view will use to filter the date on. If this is not set an error will be generated * uses_datetime_field: Convert a date into a datetime when the date field is a DateTimeField. When time zone support is enabled, `date` is assumed to be in the current time zone, so that displayed items are consistent with the URL. ### BaseDateListView Attributes * allow_empty: Defaults to `False`. This means that if there is no data a `404` error will be returned with the message > > `No __str__ Available` where ‘`__str__`’ is the display of your model * date_list_period: This attribute allows you to break down by a specific period of time (years, months, days, etc.) and group your date-driven items by the period specified. See below for implementation For `year` views.py date_list_period='year' urls.py Nothing special needs to be done <file_name>.html {% block content %} <div> {% for date in date_list %} {{ date.year }} <ul> {% for p in person %} {% if date.year == p.post_date.year %} <li>{{ p.post_date }}: {{ p.first_name }} {{ p.last_name }}</li> {% endif %} {% endfor %} </ul> {% endfor %} </div> {% endblock %} Will render:  … | 2019-11-24 | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/ArchiveIndexView/) `ArchiveIndexView` > > Top-level archive of date-based items. 
## Attributes There are 20 attributes that can be set for the `ArchiveIndexView` but most of them are based on ancestral Classes of the CBV so we won’t be going into them in detail. ### DateMixin Attributes * allow_future: Defaults to … | CBV - ArchiveIndexView | https://www.ryancheley.com/2019/11/24/cbv-archiveindexview/ |
cbv-baselistview | ryan | technology | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/BaseListView/) `BaseListView` > > A base view for displaying a list of objects. And from the [Django Docs](https://docs.djangoproject.com/en/2.2/ref/class- based-views/generic-display/#listview): > > A base view for displaying a list of objects. It is not intended to be > used directly, but rather as a parent class of the > django.views.generic.list.ListView or other views representing lists of > objects. Almost all of the functionality of `BaseListView` comes from the `MultipleObjectMixin`. Since the Django Docs specifically say don’t use this directly, I won’t go into it too much. ## Diagram A visual representation of how `BaseListView` is derived can be seen here:  ## Conclusion Don’t use this. It should be subclassed into a usable view (a la `ListView`). There are many **Base** views that are ancestors for other views. I’m not going to cover any more of them going forward **UNLESS** the documentation says there’s a specific reason to. | 2019-11-17 | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/BaseListView/) `BaseListView` > > A base view for displaying a list of objects. And from the [Django Docs](https://docs.djangoproject.com/en/2.2/ref/class- based-views/generic-display/#listview): > > A base view for displaying a list of objects. It is not intended to be > used directly, but rather as a parent class of the > django.views.generic.list.ListView … | CBV - BaseListView | https://www.ryancheley.com/2019/11/17/cbv-baselistview/ |
cbv-createview | ryan | technology | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/CreateView/) `CreateView` > > View for creating a new object, with a response rendered by a template. ## Attributes Three attributes are required to get the template to render. Two we’ve seen before (`queryset` and `template_name`). The new one we haven’t seen before is the `fields` attribute. * fields: specifies what fields from the model or queryset will be displayed on the rendered template. You can set `fields` to `__all__` if you want to return all of the fields ## Example views.py queryset = Person.objects.all() fields = '__all__' template_name = 'rango/person_form.html' urls.py path('create_view/', views.myCreateView.as_view(), name='create_view'), <template>.html {% extends 'base.html' %} <h1> {% block title %} {{ title }} {% endblock %} </h1> {% block content %} <h3>{{ type }} View</h3> <form action="." method="post"> {% csrf_token %} <table> {{ form.as_p }} </table> <button type="submit">SUBMIT</button> </form> {% endblock %} ## Diagram A visual representation of how `CreateView` is derived can be seen here: | 2019-12-01 | `CreateView` > > View for creating a new object, with a response rendered by a template. ## Attributes Three attributes are required to get the template to render. Two we’ve seen before (`queryset` and `template_name`). The new one we haven’t seen before is the `fields` attribute … | CBV - CreateView | https://www.ryancheley.com/2019/12/01/cbv-createview/ |
cbv-dayarchiveview | ryan | technology | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/DayArchiveView/) `DayArchiveView` > > List of objects published on a given day. ## Attributes There are six new attributes to review here … well really 3 new ones and then a formatting attribute for each of these 3: * day: The day to be viewed * day_format: The format of the day to be passed. Defaults to `%d` * month: The month to be viewed * month_format: The format of the month to be passed. Defaults to `%b` * year: The year to be viewed * year_format: The format of the year to be passed. Defaults to `%Y` ## Required Attributes * day * month * year * date_field: The field that holds the date that will drive everything else. We saw this in [ArchiveIndexView](/cbv-archiveindexview) Additionally you also need `model` or `queryset` The `day`, `month`, and `year` can be passed via `urls.py` so that they don’t need to be specified in the view itself. ## Example: views.py class myDayArchiveView(DayArchiveView): month_format = '%m' date_field = 'post_date' queryset = Person.objects.all() context_object_name = 'person' paginate_by = 10 page_kwarg = 'name' urls.py path('day_archive_view/<int:year>/<int:month>/<int:day>/', views.myDayArchiveView.as_view(), name='day_archive_view'), <model_name>_archive_day.html {% extends 'base.html' %} <h1> {% block title %} {{ title }} {% endblock %} </h1> {% block content %} <div> <ul> {% for p in person %} <li><a href="{% url 'rango:detail_view' p.first_name %}">{{ p.post_date }}: {{ p.first_name }} {{ p.last_name }}</a></li> {% endfor %} </ul> </div> <div class=""> {% if is_paginated %} <ul class="mui-list--inline mui--text-body2"> {% if page_obj.has_previous %} <li><a h… | 2019-11-27 | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.dates/DayArchiveView/) `DayArchiveView` > > List of objects published on a given day. 
## Attributes There are six new attributes to review here … well really 3 new ones and then a formatting attribute for each of these 3: * day: The day to be viewed * day_format: The format of the day … | CBV - DayArchiveView | https://www.ryancheley.com/2019/11/27/cbv-dayarchiveview/ |
cbv-deleteview | ryan | technology | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/DeleteView/) `DeleteView` > > View for deleting an object retrieved with self.get_object(), with a response rendered by a template. ## Attributes There are no new attributes, but 2 that we’ve seen are required: (1) `queryset` or `model`; and (2) `success_url` ## Example views.py class myDeleteView(DeleteView): queryset = Person.objects.all() success_url = reverse_lazy('rango:list_view') urls.py path('delete_view/<int:pk>', views.myDeleteView.as_view(), name='delete_view'), <template_name>.html Below is just the form that would be needed to get the delete to work. <form method="post"> {% csrf_token %} <table border="1"> <tr> <th>First Name</th> <th>Last Name</th> </tr> <tr> <td>{{ person.first_name }}</td> <td>{{ person.last_name }}</td> </tr> </table> <div> <a href="{% url 'rango:list_view' %}">Back</a> <input type="submit" value="Delete"> </div> </form> ## Diagram A visual representation of how `DeleteView` is derived can be seen here:  ## Conclusion As far as implement… | 2019-12-11 | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/DeleteView/) `DeleteView` > > View for deleting an object retrieved with self.get_object(), with a response rendered by a template. ## Attributes There are no new attributes, but 2 that we’ve seen are required: (1) `queryset` or `model`; and (2) `success_url` ## Example views.py class myDeleteView(DeleteView … | CBV - DeleteView | https://www.ryancheley.com/2019/12/11/cbv-deleteview/ |
cbv-detailview | ryan | technology | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.detail/DetailView/) `DetailView` > > Render a "detail" view of an object. >> >> By default this is a model instance looked up from `self.queryset`, but the view will support display of _any_ object by overriding `self.get_object()`. There are 7 attributes for the `DetailView` that are derived from the `SingleObjectMixin`. I’ll talk about five of them and then go over the ‘slug’ fields in their own section. * context_object_name: similar to the `ListView` it allows you to give a more memorable name to the object in the template. You’ll want to use this if you want to have future developers (i.e. you) not hate you * model: similar to the `ListView` except it only returns a single record instead of all records for the model based on a filter parameter passed via the `slug` * pk_url_kwarg: you can set this to be something other than pk if you want … though I’m not sure why you’d want to * query_pk_and_slug: The Django Docs have a pretty clear explanation of what it does > > This attribute can help mitigate [insecure direct object > reference](https://www.owasp.org/index.php/Top_10_2013-A4-Insecure_Direct_Object_References) > attacks. When applications allow access to individual objects by a > sequential primary key, an attacker could brute-force guess all URLs; > thereby obtaining a list of all objects in the application. If users with > access to individual objects should be prevented from obtaining this list, > setting query_pk_and_slug to True will help prevent the guessing of URLs > as each URL will require two correct, non-sequential arguments. Simply using > a unique slug may serve the same purpose, but this scheme allows you to have > non-unique slugs. * queryset: used to return data to the view. 
It will supersede the value supplied for `model` if both are present ## The Slug Fields There are two attributes that I want to talk about separately from the others: * slug_field * slug_url_kwarg If n… | 2019-11-24 | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.detail/DetailView/) `DetailView` > > Render a "detail" view of an object. >> >> By default this is a model instance looked up from `self.queryset`, but the view will support display of _any_ object by overriding `self.get_object()`. There are 7 attributes for the `DetailView` that are derived from the … | CBV - DetailView | https://www.ryancheley.com/2019/11/24/cbv-detailview/ |
cbv-formview | ryan | technology | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/FormView/) `FormView` > > A view for displaying a form and rendering a template response. ## Attributes The only new attribute to review this time is `form_class`. That being said, there are a few implementation details to cover * form_class: takes a Form class and is used to render the form on the `html` template later on. ## Methods Up to this point we haven’t really needed to override a method to get any of the views to work. This time though, we need some way for the view to verify that the data is valid and then save it somewhere. * form_valid: used to verify that the data entered is valid and then saves to the database. Without this method your form doesn’t do anything ## Example This example is a bit more involved than previous examples. A new file called `forms.py` is used to define the form that will be used. forms.py from django.forms import ModelForm from rango.models import Person class PersonForm(ModelForm): class Meta: model = Person exclude = [ 'post_date', ] views.py class myFormView(FormView): form_class = PersonForm template_name = 'rango/person_form.html' extra_context = { 'type': 'Form' } success_url = reverse_lazy('rango:list_view') def form_valid(self, form): person = Person.objects.create( first_name=form.cleaned_data['first_name'], last_name=form.cleaned_data['last_name'], post_date=datetime.now(), ) return super(myFormView, self).form_valid(form) urls.py path('form_view/', views.myFormView.as_view(), name='form_view'), <template_name>.html <h3>{{ type }} View</h3> {% if type != 'Update' %} <form action="." method="post"> {% else %} <form action="{… | 2019-12-04 | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/FormView/) `FormView` > > A view for displaying a form and rendering a template response. ## Attributes The only new attribute to review this time is `form_class`. 
That being said, there are a few implementation details to cover * form_class: takes a Form class and is used to render the … | CBV - FormView | https://www.ryancheley.com/2019/12/04/cbv-formview/ |
cbv-listview | ryan | technology | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/ListView/) `ListView`: > > Render some list of objects, set by `self.model` or `self.queryset`. >> >> `self.queryset` can actually be any iterable of items, not just a queryset. There are 16 attributes for the `ListView` but only 2 types are required to make the page return something other than a [500](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#5xx_Server_errors) error: * Data * Template Name ## Data Attributes You have a choice of either using `Model` or `queryset` to specify **what** data to return. Without it you get an error. The `Model` attribute gives you less control but is easier to implement. If you want to see ALL of the records of your model, just set model = ModelName However, if you want to have a bit more control over what is going to be displayed you’ll want to use `queryset`, which will allow you to chain queryset methods onto the specified model, e.g. `filter`, `order_by`. queryset = ModelName.objects.filter(field_name='filter') If you specify both `model` and `queryset` then `queryset` takes precedence. ## Template Name Attributes You have a choice of using `template_name` or `template_name_suffix`. The `template_name` allows you to directly control what template will be used. For example, if you have a template called `list_view.html` you can specify it directly in `template_name`. `template_name_suffix` will calculate what the template name should be by using the app name, model name, and appending the value set to the `template_name_suffix`. 
In pseudo code: templates/<app_name>/<model_name><template_name_suffix>.html For an app named `rango` and a model named `person` setting `template_name_suffix` to `_test` would resolve to templates/rango/person_test.html ## Other Attributes If you want to return something interesting you’ll also need to specify * allow_empty: The default for this is true which a… | 2019-11-17 | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.list/ListView/) `ListView`: > > Render some list of objects, set by `self.model` or `self.queryset`. >> >> `self.queryset` can actually be any iterable of items, not just a queryset. There are 16 attributes for the `ListView` but only 2 types are required to make the page return something … | CBV - ListView | https://www.ryancheley.com/2019/11/17/cbv-listview/ |
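The suffix resolution described in the ListView row can be sketched as a tiny helper. This mirrors the post's pseudo code, not Django's actual template-resolution internals, and the function name is an assumption:

```python
def resolve_template(app_name, model_name, template_name_suffix):
    """Mirror the pseudo code: templates/<app_name>/<model_name><suffix>.html,
    with the model name lowercased as Django does."""
    return f"templates/{app_name}/{model_name.lower()}{template_name_suffix}.html"

print(resolve_template("rango", "Person", "_test"))
# → templates/rango/person_test.html
```

This reproduces the worked example from the post: app `rango`, model `person`, suffix `_test` resolves to `templates/rango/person_test.html`.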
cbv-loginview | ryan | technology | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/LoginView/) `LoginView` > > Display the login form and handle the login action. ## Attributes * authentication_form: Allows you to subclass `AuthenticationForm` if needed. You would want to do this IF you need other fields besides username and password for login OR you want to implement other logic than just account creation, i.e. account verification must be done as well. See this [example](https://simpleisbetterthancomplex.com/tips/2016/08/12/django-tip-10-authentication-form-custom-login-policy.html) by Vitor Freitas for more details * form_class: The form that will be used by the template created. Defaults to Django’s `AuthenticationForm` * redirect_authenticated_user: If the user is logged in then when they attempt to go to your login page it will redirect them to the `LOGIN_REDIRECT_URL` configured in your `settings.py` * redirect_field_name: similar idea to updating what the `next` field will be from the `DetailView`. If this is specified then you’ll most likely need to create a custom login template. * template_name: The default value for this is `registration/login.html`, i.e. a file called `login.html` in the `registration` directory of the `templates` directory. There are no required attributes for this view, which is nice because you can just add `pass` to the view and you’re set (for the view, anyway; you still need an html file). You’ll also need to update `settings.py` to include a value for the `LOGIN_REDIRECT_URL`. ### Note on redirect_field_name Per the [Django Documentation](https://docs.djangoproject.com/en/2.2/topics/auth/default/#django.contrib.auth.decorators.login_required): > > If the user isn’t logged in, redirect to settings.LOGIN_URL, passing the > current absolute path in the query string. Example: > /accounts/login/?next=/polls/3/. 
* If `redirect_field_name` is set then the URL would be: /accounts/login/?<redirect_field_name>=/polls/3 Basically,… | 2019-12-15 | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/LoginView/) `LoginView` > > Display the login form and handle the login action. ## Attributes * authentication_form: Allows you to subclass `AuthenticationForm` if needed. You would want to do this IF you need other fields besides username and password for login OR you want to implement other logic than just … | CBV - LoginView | https://www.ryancheley.com/2019/12/15/cbv-loginview/ |
cbv-logoutview | ryan | technology | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/LogoutView/) `LogoutView` > > Log out the user and display the 'You are logged out' message. ## Attributes * next_page: redirects the user on logout. * [redirect_field_name](https://docs.djangoproject.com/en/2.2/topics/auth/default/#django.contrib.auth.views.LogoutView): The name of a GET field containing the URL to redirect to after log out. Defaults to next. Overrides the next_page URL if the given GET parameter is passed. 1 * template_name: defaults to `registration/logged_out.html`. Even if you don’t have a template the view does get rendered but it uses the default Django skin. You’ll want to create your own to allow the user to logout AND to keep the look and feel of the site. ## Example views.py class myLogoutView(LogoutView): pass urls.py path('logout_view/', views.myLogoutView.as_view(), name='logout_view'), registration/logged_out.html {% extends "base.html" %} {% load i18n %} {% block content %} <p>{% trans "Logged out" %}</p> {% endblock %} ## Diagram A visual representation of how `LogoutView` is derived can be seen here: Image Link from CCBV YUML goes here ## Conclusion I’m not sure how it could be much easier to implement a logout page. 1. Per Django Docs ↩︎ | 2019-12-15 | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/LogoutView/) `LogoutView` > > Log out the user and display the 'You are logged out' message. ## Attributes * next_page: redirects the user on logout. * [redirect_field_name](https://docs.djangoproject.com/en/2.2/topics/auth/default/#django.contrib.auth.views.LogoutView): The name of a GET field containing the URL to redirect to after log out. Defaults to next. Overrides the next_page URL if the … | CBV - LogoutView | https://www.ryancheley.com/2019/12/15/cbv-logoutview/ |
cbv-passwordchangedoneview | ryan | technology | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/PasswordChangeDoneView/) `PasswordChangeDoneView` > > Render a template. Pass keyword arguments from the URLconf to the context. ## Attributes * template_name: Much like the `LogoutView` the default view is the Django skin. Create your own `password_change_done.html` file to keep the user experience consistent across the site. * title: the default uses the function `gettext_lazy()` and passes the string ‘Password change successful’. The function `gettext_lazy()` will translate the text into the local language if a translation is available. I’d just keep the default on this. ## Example views.py class myPasswordChangeDoneView(PasswordChangeDoneView): pass urls.py path('password_change_done_view/', views.myPasswordChangeDoneView.as_view(), name='password_change_done_view'), password_change_done.html {% extends "base.html" %} {% load i18n %} {% block content %} <h1> {% block title %} {{ title }} {% endblock %} </h1> <p>{% trans "Password changed" %}</p> {% endblock %} settings.py LOGIN_URL = '/<app_name>/login_view/' The above assumes that you have this set up in your `urls.py` ## Special Notes You need to set the `LOGIN_URL` value in your `settings.py`. It defaults to `/accounts/login/`. If that path isn’t valid you’ll get a 404 error. ## Diagram A visual representation of how `PasswordChangeDoneView` is derived can be seen here:  `PasswordChangeDoneView` > > Render a template. Pass keyword arguments from the URLconf to the context. ## Attributes * template_name: Much like the `LogoutView` the default view is the Django skin. Create your own `password_change_done.html` file to keep the user experience consistent across the site. * title: the default uses … | CBV - PasswordChangeDoneView | https://www.ryancheley.com/2019/12/25/cbv-passwordchangedoneview/ |
cbv-passwordchangeview | ryan | technology | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/PasswordChangeView/) `PasswordChangeView` > > A view for displaying a form and rendering a template response. ## Attributes * form_class: The form that will be used by the template created. Defaults to Django’s `PasswordChangeForm` * success_url: If you’ve created your own custom PasswordChangeDoneView then you’ll need to update this. The default is to use Django’s, but unless your top level `urls.py` has a URL named `password_change_done` you’ll get an error. * title: defaults to ‘Password Change’ and is translated into the local language ## Example views.py class myPasswordChangeView(PasswordChangeView): success_url = reverse_lazy('rango:password_change_done_view') urls.py path('password_change_view/', views.myPasswordChangeView.as_view(), name='password_change_view'), password_change_form.html {% extends "base.html" %} {% load i18n %} {% block content %} <h1> {% block title %} {{ title }} {% endblock %} </h1> <p>{% trans "Password changed" %}</p> {% endblock %} ## Diagram A visual representation of how `PasswordChangeView` is derived can be seen here:  ## Conclusion The only thing to keep in mind here is the success_ur… | 2019-12-22 | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.contrib.auth.views/PasswordChangeView/) `PasswordChangeView` > > A view for displaying a form and rendering a template response. ## Attributes * form_class: The form that will be used by the template created. Defaults to Django’s `PasswordChangeForm` * success_url: If you’ve created your own custom PasswordChangeDoneView then you’ll need to update this … | CBV - PasswordChangeView | https://www.ryancheley.com/2019/12/22/cbv-passwordchangeview/ |
cbv-redirectview | ryan | technology | From [Classy Class Based View](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.base/RedirectView/) the `RedirectView` will > > Provide a redirect on any GET request. It is an extension of `View` and has 5 attributes: * http_method_names (from `View`) * pattern_name: The name of the URL pattern to redirect to. 1 This will be used if no `url` is used. * permanent: a flag to determine if the redirect is permanent or not. If set to `True`, then the [HTTP Status Code](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#3xx_Redirection) [301](https://en.wikipedia.org/wiki/HTTP_301) is returned. If set to `False` the [302](https://en.wikipedia.org/wiki/HTTP_302) is returned * query_string: If `True` then it will pass along the query string from the RedirectView. If it’s `False` it won’t. If this is set to `True` and neither `pattern_name` nor `url` are set then nothing will be passed to the `RedirectView` * url: Where the Redirect should point. It will take precedence over the pattern_name so you should set only `url` or `pattern_name` but not both. This will need to be an absolute url, not a relative one, otherwise you may get a [404](https://en.wikipedia.org/wiki/HTTP_404) error The example below will give a `301` status code: class myRedirectView(RedirectView): pattern_name = 'rango:template_view' permanent = True query_string = True While this would be a `302` status code: class myRedirectView(RedirectView): pattern_name = 'rango:template_view' permanent = False query_string = True ## Methods The method `get_redirect_url` allows you to perform actions when the redirect is called. From the [Django Docs](https://docs.djangoproject.com/en/2.2/ref/class-based-views/base/#redirectview) the example given is increasing a counter on an Article Read value. ## Diagram A visual representation of how `RedirectView` derives from `View` 2 | 2019-11-10 | From [Classy Class Based View](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.base/RedirectView/) the `RedirectView` will > > Provide a redirect on any GET request.
It is an extension of `View` and has 5 attributes: * http_method_names (from `View`) * pattern_name: The name of the URL pattern to redirect to. 1 This will be used if no `url` is used. * permanent: a … | CBV - RedirectView | https://www.ryancheley.com/2019/11/10/cbv-redirectview/ |
cbv-template-view | ryan | technology | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.base/TemplateView/) the `TemplateView` will > > Render a template. Pass keyword arguments from the URLconf to the context. It is an extended version of the `View` CBV with the `ContextMixin` and the `TemplateResponseMixin` added to it. It has several attributes that can be set * content_type: will allow you to define the MIME type that the page will return. The default is `DEFAULT_CONTENT_TYPE` but can be overridden with this attribute. * extra_context: this can be used as a keyword argument in `as_view()` but not in the class of the CBV. Adding it there will do nothing * http_method_names: derived from `View` and has the same definition * response_class: The response class to be returned by the render_to_response method; it defaults to TemplateResponse. See below for further discussion * template_engine: can be used to specify which template engine to use IF you have configured the use of multiple template engines in your `settings.py` file. See the [Usage](https://docs.djangoproject.com/en/2.2/topics/templates/#usage) section of the Django Documentation on Templates * template_name: this attribute is required IF the method `get_template_names()` is not used. ## More on `response_class` This confuses the ever living crap out of me. The best (only) explanation I have found is by GitHub user `spapas` in his article [Django non-HTML responses](https://spapas.github.io/2014/09/15/django-non-html-responses/#rendering-to-non-html): > > From the previous discussion we can conclude that if your non-HTML > response needs a template then you just need to create a subclass of > TemplateResponse and assign it to the response_class attribute (and also > change the content_type attribute). 
On the other hand, if your non-HTML > response does not need a template to be rendered then you have to override > render_to_response completely (since the template parameter does not need > to be passed now) and eith… | 2019-11-03 | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.base/TemplateView/) the `TemplateView` will > > Render a template. Pass keyword arguments from the URLconf to the context. It is an extended version of the `View` CBV with the `ContextMixin` and the `TemplateResponseMixin` added to it. It has several attributes that can be set * content_type: will allow … | CBV - Template View | https://www.ryancheley.com/2019/11/03/cbv-template-view/ |
cbv-updateview | ryan | technology | From [Classy Class Based Views](http://ccbv.co.uk/projects/Django/2.2/django.views.generic.edit/UpdateView/) `UpdateView` > > View for updating an object, with a response rendered by a template. ## Attributes Two attributes are required to get the template to render. We’ve seen `queryset` before and in [CreateView](/cbv-createview/) we saw `fields`. As a brief refresher * fields: specifies what fields from the model or queryset will be displayed on the rendered template. You can set `fields` to `__all__` if you want to return all of the fields * success_url: you’ll want to specify where to redirect after the record has been updated so that you know the update was made. ## Example views.py class myUpdateView(UpdateView): queryset = Person.objects.all() fields = '__all__' extra_context = { 'type': 'Update' } success_url = reverse_lazy('rango:list_view') urls.py path('update_view/<int:pk>', views.myUpdateView.as_view(), name='update_view'), \<template>.html {% block content %} <h3>{{ type }} View</h3> {% if type == 'Create' %} <form action="." method="post"> {% else %} <form action="{% url 'rango:update_view' object.id %}" method="post"> {% endif %} {% csrf_token %} <table> {{ form.as_p }} </table> <button type="submit">SUBMIT</button> </form> {% endblock %} ## Diagram A visual representation of how `UpdateView` is derived can be seen here:  `UpdateView` > > View for updating an object, with a response rendered by a template. ## Attributes Two attributes are required to get the template to render. We’ve seen `queryset` before and in [CreateView](/cbv-createview/) we saw `fields`. As a brief refresher * fields: specifies what fields from the … | CBV - UpdateView | https://www.ryancheley.com/2019/12/08/cbv-updateview/ |
cbv-view | ryan | technology | `View` is the ancestor of ALL Django CBV. From the great site [Classy Class Based Views](http://ccbv.co.uk), they are described as > > Intentionally simple parent class for all views. Only implements dispatch-by-method and simple sanity checking. This is no joke. The `View` class has almost nothing to it, but it’s a solid foundation for everything else that will be done. Its implementation has just one attribute `http_method_names` which is a list that allows you to specify what http verbs are allowed. Other than that, there’s really not much to it. You just write a simple method, something like this: def get(self, _): return HttpResponse('My Content') All that gets returned to the page is simple HTML. You can specify the `content_type` if you just want to return JSON or plain text by defining the content_type like this: def get(self, _): return HttpResponse('My Content', content_type='text/plain') You can also make the text that is displayed be based on a variable defined in the class. First, you need to define the variable content = 'This is a {View} template and is not used for much of anything but ' 'allowing extensions of it for other Views' And then you can do something like this: def get(self, _): return HttpResponse(self.content, content_type='text/plain') Also, as mentioned above you can specify the allowable methods via the attribute `http_method_names`. The following HTTP methods are allowed: * get * post * put * patch * delete * head * options * trace By default all are allowed. If we put all of the pieces together we can see that a really simple `View` CBV would look something like this: class myView(View): content = 'This is a {View} template and is not used for much of anything but ' 'allowing extensions of it for other Views' http_method_names = ['get'] def get(self, _): r… | 2019-10-27 | `View` is the ancestor of ALL Django CBV. 
From the great site [Classy Class Based Views](http://ccbv.co.uk), they are described as > > Intentionally simple parent class for all views. Only implements dispatch-by-method and simple sanity checking. This is no joke. The `View` class has almost nothing to it, but it’s a … | CBV - View | https://www.ryancheley.com/2019/10/27/cbv-view/ |
class-based-views | ryan | technology | As I’ve written about [previously](/my-first-project-after-completing-the-100-days-of-web-in-python.html) I’m working on a Django app. It’s in a pretty good spot (you should totally check it out over at [StadiaTracker.com](https://www.stadiatracker.com)) and I thought now would be a good time to learn a bit more about some of the ways that I’m rendering the pages. I’m using Class Based Views (CBV) and I realized that I really didn’t [grok](https://en.wikipedia.org/wiki/Grok) how they worked. I wanted to change that. I’ll be working on a series where I deep dive into the CBV and work them from several angles and try to get them to do all of the things that they are capable of. The first place I’d suggest anyone start to get a good idea of CBV, and the idea of Mixins would be [SpaPas’ GitHub Page](https://spapas.github.io/2018/03/19/comprehensive-django-cbv-guide/) where he does a really good job of covering many pieces of the CBV. It’s a great resource! This is just the intro to this series and my hope is that I’ll publish one of these pieces each week for the next several months as I work my way through all of the various CBV that are available. | 2019-10-27 | As I’ve written about [previously](/my-first-project-after-completing-the-100-days-of-web-in-python.html) I’m working on a Django app. It’s in a pretty good spot (you should totally check it out over at [StadiaTracker.com](https://www.stadiatracker.com)) and I thought now would be a good time to learn a bit more about some of the ways that … | Class Based Views | https://www.ryancheley.com/2019/10/27/class-based-views/ |
contributing-to-django | ryan | technology | I went to [DjangoCon US](https://2022.djangocon.us) a few weeks ago and [hung around for the sprints](https://twitter.com/pauloxnet/status/1583350887375773696). I was particularly interested in working on open tickets related to the ORM. It so happened that [Simon Charette](https://github.com/charettes) was at Django Con and was able to meet with several of us to talk through the inner workings of the ORM. With Simon helping to guide us, I took a stab at an open ticket and settled on [10070](https://code.djangoproject.com/ticket/10070). After reviewing it on my own, and then with Simon, it looked like it wasn't really a bug anymore, and so we agreed that I could mark it as [done](https://code.djangoproject.com/ticket/10070#comment:22). Kind of anticlimactic given what I was **hoping** to achieve, but a closed ticket is a closed ticket! And so I [tweeted out my accomplishment](https://twitter.com/ryancheley/status/1583206004744867841) for all the world to see. A few weeks later though, a [comment](https://code.djangoproject.com/ticket/10070#comment:22) was added that it actually was still a bug and it was reopened. I was disappointed ... but I now had a chance to actually fix a real bug! [I started in earnest](https://github.com/ryancheley/public-notes/issues/1#issue-1428819941). A suggestion / pattern for working through learning new things that [Simon Willison](https://simonwillison.net) had mentioned was having a `public-notes` repo on GitHub. He's had some great stuff that he's worked through that you can see [here](https://github.com/simonw/public-notes/issues?q=is%3Aissue). Using this as a starting point, I decided to [walk through what I learned while working on this open ticket](https://github.com/ryancheley/public-notes/issues/1). Over the course of 10 days I had a 38-comment 'conversation with myself' and it was **super** helpful! 
A couple of key takeaways from working on this issue: * [Carlton Gibson](https://github.com/carltongibson) [said](https://overcast.fm/+QkIrhujD0/21:00) essentially … | 2022-11-12 | I went to [DjangoCon US](https://2022.djangocon.us) a few weeks ago and [hung around for the sprints](https://twitter.com/pauloxnet/status/1583350887375773696). I was particularly interested in working on open tickets related to the ORM. It so happened that [Simon Charette](https://github.com/charettes) was at Django Con and was able to meet with several of us to talk through … | Contributing to Django or how I learned to stop worrying and just try to fix an ORM Bug | https://www.ryancheley.com/2022/11/12/contributing-to-django/ |
contributing-to-django-sql-dashboard | ryan | technology | Last Saturday (July 3rd) while on vacation, I dubbed it “Security update Saturday”. I took the opportunity to review all of the GitHub bot alerts about out of date packages, and make the updates I needed to. This included updating `django-sql-dashboard` to [version 1.0](https://github.com/simonw/django-sql-dashboard/releases/tag/1.0) … which I was really excited about doing. It included two things I was eager to see: 1. Implemented a new column cog menu, with options for sorting, counting distinct items and counting by values. [#57](https://github.com/simonw/django-sql-dashboard/issues/57) 2. Admin change list view now only shows dashboards the user has permission to edit. Thanks, [Atul Varma](https://github.com/atverma). [#130](https://github.com/simonw/django-sql-dashboard/issues/130) I made the updates on my site StadiaTracker.com using my normal workflow: 1. Make the change locally on my MacBook Pro 2. Run the tests 3. Push to UAT 4. Push to PROD The next day, on July 4th, I got the following error message via my error logging: Internal Server Error: /dashboard/games-seen-in-person/ ProgrammingError at /dashboard/games-seen-in-person/ could not find array type for data type information_schema.sql_identifier So I copied the [url](https://stadiatracker.com/dashboard/games-seen-in-person/) `/dashboard/games-seen-in-person/` to see if I could replicate the issue as an authenticated user and sure enough, I got a 500 Server error. ## Troubleshooting process The first thing I did was to fire up the local version and check the url there. Oddly enough, it worked without issue. OK … well that’s odd. What are the differences between the local version and the uat / prod version? The local version is running on macOS 10.15.7 while the uat / prod versions are running Ubuntu 18.04. That could be one source of the issue. 
The local version is running Postgres 13.2 while the uat / prod versions are running Postgres 10.17. OK, two differences. Since the error is `could not… | 2021-07-09 | Last Saturday (July 3rd) while on vacation, I dubbed it “Security update Saturday”. I took the opportunity to review all of the GitHub bot alerts about out of date packages, and make the updates I needed to. This included updating `django-sql-dashboard` to [version 1.0](https://github.com/simonw/django-sql-dashboard/releases/tag/1.0) … which I was really excited … | Contributing to django-sql-dashboard | https://www.ryancheley.com/2021/07/09/contributing-to-django-sql-dashboard/ |
contributing-to-tryceratops | ryan | technology | I read about a project called [Tryceratops](https://pypi.org/project/tryceratops/) on Twitter when it was [tweeted about by Jeff Triplett](https://twitter.com/webology/status/1414233648534933509) I checked it out and it seemed interesting. I decided to use it on my [simplest Django project](https://doestatisjrhaveanerrortoday.com) just to give it a test drive running this command: tryceratops . and got this result: Done processing! 🦖✨ Processed 16 files Found 0 violations Failed to process 1 files Skipped 2340 files This is nice, but what is the file that failed to process? This left me with two options: 1. Complain that this awesome tool created by someone didn't do the thing I thought it needed to do OR 2. Submit an issue to the project and offer to help. I went with option 2 😀 My initial commit was made in a pretty naive way. It did the job, but not in the best way for maintainability. I had a really great exchange with the maintainer [Guilherme Latrova](https://github.com/guilatrova) about the change that was made and he helped to direct me in a different direction. The biggest thing I learned while working on this project (for Python at least) was the `logging` library. Specifically I learned how to add: * a formatter * a handler * a logger For my change, I added a simple format with a verbose handler in a custom logger. It looked something like this: The formatter: "simple": { "format": "%(message)s", }, The handler: "verbose_output": { "class": "logging.StreamHandler", "level": "DEBUG", "formatter": "simple", "stream": "ext://sys.stdout", }, The logger: "loggers": { "tryceratops": { "level": "INFO", "handlers": [ "verbose_output", ], }, }, This allows the `verbose` flag to output the message to Standard Out and give an `INFO` level of detail. 
Becau… | 2021-08-07 | I read about a project called [Tryceratops](https://pypi.org/project/tryceratops/) on Twitter when it was [tweeted about by Jeff Triplett](https://twitter.com/webology/status/1414233648534933509) I checked it out and it seemed interesting. I decided to use it on my [simplest Django project](https://doestatisjrhaveanerrortoday.com) just to give it a test drive running this command: tryceratops . and got this result … | Contributing to Tryceratops | https://www.ryancheley.com/2021/08/07/contributing-to-tryceratops/ |
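Assembled into a complete `logging.config.dictConfig` call, the three fragments above fit together roughly like this. This is a standalone sketch, not Tryceratops' actual config module:

```python
import logging
import logging.config

LOGGING_CONFIG = {
    "version": 1,
    "formatters": {
        # message-only output, as in the post
        "simple": {"format": "%(message)s"},
    },
    "handlers": {
        "verbose_output": {
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "formatter": "simple",
            "stream": "ext://sys.stdout",
        },
    },
    "loggers": {
        "tryceratops": {
            "level": "INFO",
            "handlers": ["verbose_output"],
        },
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger("tryceratops")
logger.info("Done processing!")  # emitted to stdout via the simple formatter
```

Because the logger is capped at `INFO`, `logger.debug(...)` calls are dropped even though the handler itself accepts `DEBUG`.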
creating-hastags-for-social-media-with-a-drafts-action | ryan | technology | Creating meaningful, long #hastags can be a pain in the butt. There you are, writing up a witty tweet or making that perfect caption for your instagram pic and you realize that you have a fantastic idea for a hash tag that is more of a sentence than a single word. You proceed to write it out and unleash your masterpiece to the world and just as you hit the submit button you notice that you have a typo, or the wrong spelling of a word and #ohcrap you need to delete and retweet! That led me to write a [Drafts](https://getdrafts.com) Action to take care of that. I’ll leave [others to write about the virtues of Drafts](https://www.macstories.net/reviews/drafts-5-the-macstories-review/), but it’s fantastic. The Action I created has two steps: (1) to run some JavaScript and (2) to copy the contents of the draft to the Clipboard. You can get my action [here](https://actions.getdrafts.com/a/1Uo). Here’s the JavaScript that I used to take a big long sentence and turn it into a social media worthy hashtag var contents = draft.content; var newContents = "#"; editor.setText(newContents+contents.replace(/ /g, "").toLowerCase()); Super simple, but holy crap does it help! | 2019-03-30 | Creating meaningful, long #hastags can be a pain in the butt. There you are, writing up a witty tweet or making that perfect caption for your instagram pic and you realize that you have a fantastic idea for a hash tag that is more of a sentence than a single … | Creating Hastags for Social Media with a Drafts Action | https://www.ryancheley.com/2019/03/30/creating-hastags-for-social-media-with-a-drafts-action/ |
cronjob-finally | ryan | technology | I’ve mentioned before that I have been working on getting the hummingbird video upload automated. Each time I thought I had it, and each time I was wrong. For some reason I could run it from the command line without issue, but when the cronjob would try and run it ... nothing. Turns out, it was running, it just wasn’t doing anything. And that was my fault. The file I had set up in the cronjob was called `run_scrip.sh`. At first I was confused because the script was supposed to be writing all of its activities out to a log file. But it didn’t appear to. Then I noticed that the log.txt file it was writing was in the main `~` directory. That should have been my first clue. I kept trying to get the script to run, but suddenly, in a blaze of glory, realized that it **was** running, it just wasn’t doing anything. And it wasn’t doing anything for the same reason that the log file was being written to the `~` directory. All of the paths were relative instead of absolute, so when the script ran the command `./create_mp4.sh` it looked for that script in the home directory, didn’t find it, and moved on. The fix was simple enough, just add absolute paths and we’re golden. 
That means my `run_script.sh` goes from this: # Create the script that will be run ./create_script.sh echo "Create Shell Script: $(date)" >> log.txt # make the script that was just created executable chmod +x /home/pi/Documents/python_projects/create_mp4.sh # Create the script to create the mp4 file /home/pi/Documents/python_projects/create_mp4.sh echo "Create MP4 Shell Script: $(date)" >> /home/pi/Documents/python_projects/log.txt # upload video to YouTube.com /home/pi/Documents/python_projects/upload.sh echo "Uploaded Video to YouTube.com: $(date)" >> /home/pi/Documents/python_projects/log.txt # Next we remove the video files locally rm /home/pi/Documents/python_projects/*.h264 echo "removed h264 files: $(date)" >> /home/pi/Documents/python_projects/log.txt … | 2018-04-10 | I’ve mentioned before that I have been working on getting the hummingbird video upload automated. Each time I thought I had it, and each time I was wrong. For some reason I could run it from the command line without issue, but when the cronjob would try and run … | Cronjob ... Finally | https://www.ryancheley.com/2018/04/10/cronjob-finally/ |
cronjob-redux | ryan | technology | After **days** of trying to figure this out, I finally got the video to upload via a cronjob. There were 2 issues. ## Issue the first Finally found the issue. [Original script from YouTube developers guide](https://developers.google.com/youtube/v3/guides/uploading_a_video) had this: CLIENT_SECRETS_FILE = "client_secrets.json" And then a couple of lines later, this: % os.path.abspath(os.path.join(os.path.dirname(__file__), CLIENT_SECRETS_FILE)) When `crontab` would run the script it would run from a path that wasn’t where the `CLIENT_SECRETS_FILE` file was and so a message would be displayed: WARNING: Please configure OAuth 2.0 To make this sample run you will need to populate the client_secrets.json file found at: %s with information from the Developers Console https://console.developers.google.com/ For more information about the client_secrets.json file format, please visit: https://developers.google.com/api-client-library/python/guide/aaa_client_secrets What I needed to do was to update the `CLIENT_SECRETS_FILE` to be the whole path so that it could always find the file. A simple change: CLIENT_SECRETS_FILE = os.path.abspath(os.path.join(os.path.dirname(__file__), CLIENT_SECRETS_FILE)) ## Issue the second When the `create_mp4.sh` script would run it was reading all of the `h264` files from the directory where they lived **BUT** it was attempting to output the `mp4` file to `/` which it didn’t have permission to write to. This was failing silently (I’m still not sure how I could have caught the error). Since there was no `mp4` file to upload that script was failing (though it was true that the location of the `CLIENT_SECRETS_FILE` was an issue). What I needed to do was change the `create_mp4.sh` file so that the MP4Box command would output the `mp4` file to the proper directory. 
The script went from this: (echo '#!/bin/sh'; echo -n "MP4Box"; array=($(ls ~/D… | 2018-04-20 | After **days** of trying to figure this out, I finally got the video to upload via a cronjob. There were 2 issues. ## Issue the first Finally found the issue. [Original script from YouTube developers guide](https://developers.google.com/youtube/v3/guides/uploading_a_video)had this: CLIENT_SECRETS_FILE = "client_secrets.json" And then a couple of lines later, this: % os.path … | Cronjob Redux | https://www.ryancheley.com/2018/04/20/cronjob-redux/ |
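The fix for the first issue generalizes: anchor file paths to the script's own location rather than to the process's working directory, since cron starts jobs from a different directory than an interactive shell does. A minimal illustration of the pattern:

```python
import os

# Relative: resolved against whatever directory cron happens to start in,
# which is why the script worked interactively but not under cron.
relative = "client_secrets.json"

# Anchored: always resolved next to this script, regardless of the cwd.
anchored = os.path.abspath(os.path.join(os.path.dirname(__file__), relative))
```

The same principle applied to the shell scripts (absolute paths for `create_mp4.sh`, the log file, and the MP4Box output) is what made the whole cron pipeline work.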
daylight-savings-time | ryan | technology | [Dr Drang has posted on Daylight Savings in the past](http://www.leancrew.com/all-this/2013/03/why-i-like-dst/), but in a recent [post](http://leancrew.com/all-this/2018/03/one-table-following-another/) he critiqued (rightly so) the data presentation by a journalist at the Washington Post on Daylight Savings, and that got me thinking. In the post he generated a chart showing both the total number of daylight hours and the sunrise / sunset times in Chicago. However, initially he didn’t post the code on how he generated it. The next day, in a follow up [post](http://leancrew.com/all-this/2018/03/the-sunrise-plot/), he did and that **really** got me thinking. I wonder what the chart would look like for cities up and down the west coast (say from San Diego, CA to Seattle, WA)? Drang’s post had all of the code necessary to generate the graph, but for the data munging, he indicated: > > If I were going to do this sort of thing on a regular basis, I’d write a > script to handle this editing, but for a one-off I just did it “by hand.” Doing it by hand wasn’t going to work for me if I was going to do several cities and so I needed to write a parser for the source of the data ([The US Naval Observatory](http://aa.usno.navy.mil)). The entire script is on my GitHub [sunrise_sunset](https://github.com/ryancheley/sunrise_sunset) repo. I won’t go into the nitty gritty details, but I will call out a couple of things that I discovered during the development process. Writing a parser is hard. Like _really_ hard. Each time I thought I had it, I didn’t. I was finally able to get the parser to work on cities with `01`, `29`, `30`, or `31` in their longitude / latitude combinations. I generated the same graph as Dr. 
Drang for the following cities: * Phoenix, AZ * Eugene, OR * Portland, OR * Salem, OR * Seaside, OR * Eureka, CA * Indio, CA * Long Beach, CA * Monterey, CA * San Diego, CA * San Francisco, CA * San Luis Obispo, CA * Ventura, CA * Ferndale, WA * Olympia, WA * Seattle, WA Why did I pick… | 2018-03-26 | [Dr Drang has posted on Daylight Savings in the past](http://www.leancrew.com/all-this/2013/03/why-i-like-dst/), but in a recent [post](http://leancrew.com/all-this/2018/03/one-table-following-another/) he critiqued (rightly so) the data presentation by a journalist at the Washington Post on Daylight Savings, and that got me thinking. In the post he generated a chart showing both the total number of … | Daylight Savings Time | https://www.ryancheley.com/2018/03/26/daylight-savings-time/ |
debugging-setting-up-a-django-project | ryan | technology | Normally when I start a new Django project I’ll use the PyCharm setup wizard, but recently I wanted to try out VS Code for a Django project and was super stumped when I would get a message like this: ERROR:root:code for hash md5 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 147, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 97, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type md5 ERROR:root:code for hash sha1 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 147, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 97, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha1 ERROR:root:code for hash sha224 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 147, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 97, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha224 ERROR:root:code for hash sha256 was not found. 
Traceback (most recent call last): File "/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 147, in <m… | 2021-06-13 | Normally when I start a new Django project I’ll use the PyCharm setup wizard, but recently I wanted to try out VS Code for a Django project and was super stumped when I would get a message like this: ERROR:root:code for hash md5 was not found. Traceback … | Debugging Setting up a Django Project | https://www.ryancheley.com/2021/06/13/debugging-setting-up-a-django-project/ |
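The errors above come from a Python build whose `_hashlib` extension has lost its link to OpenSSL, so the standard hash constructors can't be built. As a quick diagnostic (a Python 3 sketch, not code from the original post), you can ask `hashlib` which of its guaranteed algorithms actually construct:

```python
import hashlib


def broken_hashes():
    """Return the names of guaranteed algorithms that fail to construct."""
    broken = []
    for name in sorted(hashlib.algorithms_guaranteed):
        try:
            hashlib.new(name)
        except ValueError:
            # This is the "unsupported hash type ..." failure from the post
            broken.append(name)
    return broken


# On a healthy interpreter this should print an empty list; on the broken
# Homebrew build described above it would include md5, sha1, sha224, etc.
print(broken_hashes())
```

Running this under each interpreter on your PATH is a fast way to see which installs are affected.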
deploying-a-django-site-to-digital-ocean-a-series | ryan | technology | ## Previous Efforts When I first heard of Django I thought it looked like a really interesting, and Pythonic, way to get a website up and running. I spent a whole weekend putting together a site locally and then, using Digital Ocean, decided to push my idea up onto a live site. One problem that I ran into, which EVERY new Django Developer will run into, was static files. I couldn’t get static files to work. No matter what I did, they were just … missing. I proceeded to spend the next few weekends trying to figure out why, but alas, I was not very good (or patient) with reading documentation and gave up. Fast forward a few years, and while taking the 100 Days of Code on the Web Python course from Talk Python to Me I was able to follow along on a part of the course that pushed up a Django App to Heroku. I wrote about that effort [here](https://pybit.es/my-first-django-app.html). Needless to say, I was pretty pumped. But, I was wondering, is there a way I can actually get a Django site to work on a non-Heroku (PaaS) type infrastructure? ## Inspiration While going through my Twitter timeline I came across a retweet from TestDriven.io of [Matt Segal](https://mattsegal.dev/simple-django-deployment.html). He has an **amazing** walkthrough of deploying a Django site on the hard level (i.e. using Windows). It’s a mix of Blog posts and YouTube Videos and I highly recommend it. There is some NSFW language, BUT if you can get past that (and I can) it’s a great resource. This series is meant to be a written record of what I did to implement these recommendations and suggestions, and then to push myself a bit further to expand the complexity of the app. ## Articles A list of the Articles will go here.
For now, here’s a rough outline of the planned posts: * [Setting up the Server (on Digital Ocean)](/setting-up-the-server-on-digital-ocean.html) * [Getting your Domain to point to Digital Ocean Your Server](/getting-your-domain-to-point-to-digital-ocean-your-server.html) * [Preparing the code for deployment to Digit… | 2021-01-24 | ## Previous Efforts When I first heard of Django I thought it looks like a really interesting, and Pythonic way, to get a website up and running. I spent a whole weekend putting together a site locally and then, using Digital Ocean, decided to push my idea up onto a live … | Deploying a Django Site to Digital Ocean - A Series | https://www.ryancheley.com/2021/01/24/deploying-a-django-site-to-digital-ocean-a-series/ |
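The static-files problem mentioned above almost always comes down to a handful of settings plus a `collectstatic` step. A minimal sketch of the usual production configuration (setting names are Django's; the paths here are illustrative, not from this series):

```python
# settings.py sketch — the static files settings that trip up most first
# Django deployments. BASE_DIR uses the current directory so the sketch is
# self-contained; a real settings.py would use
# Path(__file__).resolve().parent.parent.
from pathlib import Path

BASE_DIR = Path(".").resolve()

STATIC_URL = "/static/"                   # URL prefix used in templates
STATIC_ROOT = BASE_DIR / "staticfiles"    # target of `manage.py collectstatic`
STATICFILES_DIRS = [BASE_DIR / "static"]  # extra source directories to collect

# With DEBUG = False, Django itself no longer serves static files; the web
# server (e.g. nginx) has to serve STATIC_ROOT at the STATIC_URL prefix.
```

The "just … missing" symptom is usually one of these three pointing at the wrong place, or `collectstatic` never having been run on the server.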
django-and-legacy-databases | ryan | technology | I work at a place that is heavily investing in the Microsoft Tech Stack. Windows Servers, c#.Net, Angular, VB.net, Windows Work Stations, Microsoft SQL Server ... etc When not at work, I **really** like working with Python and Django. I've never really thought I'd be able to combine the two until I discovered the package mssql-django which was released Feb 18, 2021 in alpha and as a full-fledged version 1 in late July of that same year. Ever since then I've been trying to figure out how to incorporate Django into my work life. I'm going to use this series as an outline of how I'm working through the process of getting Django to be useful at work. The issues I run into, and the solutions I'm (hopefully) able to achieve. I'm also going to use this as a more in-depth analysis of an accompanying talk I'm hoping to give at [Django Con 2022](https://2022.djangocon.us) later this year. I'm going to break this down into a several-part series that will roughly align with the talk I'm hoping to give. The parts will be: 1. Introduction/Background 2. Overview of the Project 3. Wiring up the Project Models 4. Database Routers 5. Django Admin Customization 6. Admin Documentation 7. Review & Resources My intention is to publish one part every week or so. Sometimes the posts will come fast, and other times not. This will mostly be due to how well I'm doing with writing up my findings and/or getting screenshots that will work. The tool set I'll be using is: * docker * docker-compose * Django * MS SQL * SQLite | 2022-06-15 | I work at a place that is heavily investing in the Microsoft Tech Stack. Windows Servers, c#.Net, Angular, VB.net, Windows Work Stations, Microsoft SQL Server ... etc When not at work, I **really** like working with Python and Django. I've never really thought I'd be able to combine the … | Django and Legacy Databases | https://www.ryancheley.com/2022/06/15/django-and-legacy-databases/ |
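Pairing mssql-django with that tool set usually means a `DATABASES` setting with one entry per backend. A sketch of what that looks like (server, database, and credential values are placeholders, not from this series; the `ENGINE` string `"mssql"` is what the mssql-django package registers):

```python
# settings.py sketch — default SQLite database alongside a legacy MS SQL
# database reached through the mssql-django backend. All names and
# credentials below are illustrative placeholders.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "db.sqlite3",
    },
    "legacy": {
        "ENGINE": "mssql",                 # provided by mssql-django
        "NAME": "LegacyDB",
        "HOST": "mssql-server",
        "PORT": "1433",
        "USER": "sa",
        "PASSWORD": "change-me",
        "OPTIONS": {"driver": "ODBC Driver 17 for SQL Server"},
    },
}

# A database router (part 4 of the series) then decides which queries go to
# "default" and which go to "legacy".
```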
django-commons | ryan | technology | First, what are "the commons"? The concept of "the commons" refers to resources that are shared and managed collectively by a community, rather than being owned privately or by the state. This idea has been applied to natural resources like air, water, and grazing land, but it has also expanded to include digital and cultural resources, such as open-source software, knowledge databases, and creative works. As Organization Administrators of Django Commons, we're focusing on sustainability and stewardship as key aspects. Asking for help is hard, but it can be done more easily in a safe environment. As we saw with the [xz utils backdoor](https://en.wikipedia.org/wiki/XZ_Utils_backdoor) attack, maintainer burnout is real. And while there are several arguments about being part of a 'supply chain', if we can, as a community, offer up a place where maintainers can work together for the sustainability and support of their packages, the Django community will be better off! From the [README](https://github.com/django-commons/membership/blob/main/README.md) of the membership repo in Django Commons > Django Commons is an organization dedicated to supporting the community's > efforts to maintain packages. It seeks to improve the maintenance experience > for all contributors; reducing the barrier to entry for new contributors and > reducing overhead for existing maintainers. OK, but what does this new organization get me as a maintainer? The (stretch) goal is that we'll be able to provide support to maintainers. Whether that's helping to identify best practices for packages (like requiring tests), or normalizing the idea that maintainers can take a step back from their project and know that there will be others to help keep the project going. Being able to accomplish these two goals would be amazing ... but we want to do more!
In the long term we're hoping that we're able to do something to help provide compensation to maintainers, but as I said, that's a long term goal. The project was spearheaded by Tim Schilling and he was… | 2024-10-23 | First, what are "the commons"? The concept of "the commons" refers to resources that are shared and managed collectively by a community, rather than being owned privately or by the state. This idea has been applied to natural resources like air, water, and grazing land, but it has also expanded … | Django Commons | https://www.ryancheley.com/2024/10/23/django-commons/ |
django-form-filters | ryan | technology | I’ve been working on a Django Project for a while and one of the apps I have tracks candidates. These candidates have dates of a specific type. The models look like this: ## Candidate class Candidate(models.Model): first_name = models.CharField(max_length=128) last_name = models.CharField(max_length=128) resume = models.FileField(storage=PrivateMediaStorage(), blank=True, null=True) cover_leter = models.FileField(storage=PrivateMediaStorage(), blank=True, null=True) email_address = models.EmailField(blank=True, null=True) linkedin = models.URLField(blank=True, null=True) github = models.URLField(blank=True, null=True) rejected = models.BooleanField() position = models.ForeignKey( "positions.Position", on_delete=models.CASCADE, ) hired = models.BooleanField(default=False) ## CandidateDate class CandidateDate(models.Model): candidate = models.ForeignKey( "Candidate", on_delete=models.CASCADE, ) date_type = models.ForeignKey( "CandidateDateType", on_delete=models.CASCADE, ) candidate_date = models.DateField(blank=True, null=True) candidate_date_note = models.TextField(blank=True, null=True) meeting_link = models.URLField(blank=True, null=True) class Meta: ordering = ["candidate", "-candidate_date"] unique_together = ( "candidate", "date_type", ) ## CandidateDateType class CandidateDateType(models.Model): date_type = models.CharField(max_length=24) description = models.CharField(max_length=255, null=True, blank=True) You’ll see from the CandidateDate model that the fields `candidate` and `date_type` are unique. One problem that I’ve been running into is how to help make that an easier thing to see in the form where the dates are entered. The Dja… | 2021-01-23 | I’ve been working on a Django Project for a while and one of the apps I have tracks candidates. These candidates have dates of a specific type. 
The models look like this: ## Candidate class Candidate(models.Model): first_name = models.CharField(max_length=128) last_name = models.CharField(max_length=128) resume = models … | Django form filters | https://www.ryancheley.com/2021/01/23/django-form-filters/ |
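The idea behind surfacing the `unique_together` constraint in the form can be sketched without Django: since a candidate can have each date type at most once, only offer the types the candidate hasn't already used. In a real `ModelForm` this would be a queryset `.exclude(...)` on `CandidateDateType`; the plain-Python version below is illustrative, not the post's actual code:

```python
# Plain-Python sketch of filtering a form's date_type choices down to the
# ones a candidate hasn't used yet (mirroring the unique_together
# constraint on CandidateDate). Names and sample data are hypothetical.
def available_date_types(all_types, used_types):
    """Return date types not yet used by the candidate, preserving order."""
    used = set(used_types)
    return [t for t in all_types if t not in used]


types = ["Phone Screen", "Interview", "Offer"]
print(available_date_types(types, ["Phone Screen"]))  # ['Interview', 'Offer']
```

In Django the same filtering would land in the form's `__init__`, narrowing `self.fields["date_type"].queryset` before the form is rendered.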
djangocon-us-2023 | ryan | technology | # My Experience at DjangoCon US 2023 A few days ago I returned from DjangoCon US 2023 and wow, what an amazing time. The only regret I have is that I didn't take very many pictures. This is something I will need to work on for next year. On Monday October 16th I gave a talk [Contributing to Django or how I learned to stop worrying and just try to fix an ORM Bug](https://2023.djangocon.us/talks/contributing-to-django-or-how-i-learned-to-stop-worrying-and-just-try-to-fix-an-orm-bug/). The video will be posted on YouTube in a few weeks. This was the first tech conference I've ever spoken at!!!! I was super nervous leading up to the talk, and even a bit at the start, but once I got going I finally settled in. Here's me on stage taking a selfie with the crowd behind me  Luckily, my talk was one of the first non-Keynote talks so I was able to relax and enjoy the conference for the rest of the time. After the conference talks ended on Wednesday I stuck around for the sprints. This is such a great time to be able to work on open source projects (Django adjacent or not) and just generally hang out with other Djangonauts. I was able to do some work on DjangoPackages with Jeff Triplett, and just generally hang out with some truly amazing people. The Django community is just so great. I've been to many conferences before, but this one is the first where I feel like I belong. I am having some of those post conference blues, but thankfully Kojo Idrissa wrote something about how to [help with that](https://kojoidrissa.com/conferences/community/pycon%20africa/noramgt/2019/08/11/post_conference_depression.html). And taking his advice, it has been helpful to come down from the Conference high. Although the location of DjangoCon US 2024 hasn't been announced yet, I'm making plans to attend.
I am also setting myself some goals to have completed by the start of DCUS 2024 * join the fundraising working group * work … | 2023-10-24 | # My Experience at DjangoCon US 2023 A few days ago I returned from DjangoCon US 2023 and wow, what an amazing time. The only regret I have is that I didn't take very many pictures. This is something I will need to work on for next year. On Monday October … | DjangoCon US 2023 | https://www.ryancheley.com/2023/10/24/djangocon-us-2023/ |
djangocon-us-2024-talk | ryan | technology | At DjangoCon US 2023 I gave a talk, and wrote about my experience [preparing for that talk](https://www.ryancheley.com/2023/12/15/so-you-want-to-give-a-talk-at-a-conference/). Well, I spoke again at DjangoCon US this year (2024) and had a similar, but wildly different experience in preparing for my talk. Last year I lamented that I didn't really track my time (which is weird because I track my time for ALL sorts of things!). This year, I did track my time and have a much better sense of how much time I prepared for the talk. Another difference between each year is that in 2023 I gave a 45 minute talk, while this year my talk was 25 minutes. I've heard that you need about 1 hour of prep time for each 1 minute of talk that you're going to give. That means that, on average, for a 25 minute talk I'd need about 25 hours of prep time. [My time tracking shows](https://track.toggl.com/shared-report/6c52f45a0feea26f7c8fd987abf73b2e) that I was a little short of that (19 hours) but my talk ended up being about 20 minutes, so it seems that maybe I was on track for that. This year, as last year, my general prep technique was to: 1. Give the presentation AND record it 2. Watch the recording and make notes about what I needed to change 3. Make the changes I would typically do each step on a different day, though towards the end I would do steps 2 and 3 on the same day, and during the last week I would do all of the steps on the same day. This flow really seems to help me get the most out of practicing my talk and getting a sense of its strengths and weaknesses. One issue that came up a week before I was to leave for DjangoCon US is that my boss said I couldn't have anything directly related to my employer in the presentation. My initial drafts didn't have specifics, but the examples I used were too close for my comfort on that, so I ended up having to refactor that part of my talk. Honestly, I think it came out better because of it.
During my practice runs I felt like I was kind of dancing around topics, but… | 2024-10-17 | At DjangoCon US 2023 I gave a talk, and wrote about my experience [preparing for that talk](https://www.ryancheley.com/2023/12/15/so-you-want-to-give-a-talk-at-a-conference/). Well, I spoke again at DjangoCon US this year (2024) and had a similar, but wildly different experience in preparing for my talk. Last year I lamented that I didn't really track my … | DjangoCon US 2024 Talk | https://www.ryancheley.com/2024/10/17/djangocon-us-2024-talk/ |
djhtml-and-justfile | ryan | technology | I had read about a project called djhtml and wanted to use it on one of my projects. The documentation is really good for adding it to precommit-ci, but I wasn't sure what I needed to do to just run it on the command line. It took a bit of googling, but I was finally able to get the right incantation of commands to be able to get it to run on my templates: djhtml -i $(find templates -name '*.html' -print) But of course because I have the memory of a goldfish and this is more than 3 commands to try to remember to string together, instead of telling myself I would remember it, I simply added it to a just file and now have this recipe: # applies djhtml linting to templates djhtml: djhtml -i $(find templates -name '*.html' -print) This means that I can now run `just djhtml` and I can apply djhtml's linting to my templates. Pretty darn cool if you ask me. But then I got to thinking, I can make this a bit more general for 'linting' type activities. I include all of these in my precommit-ci, but I figured, what the heck, might as well have a just recipe for all of them! So I refactored the recipe to be this: # applies linting to project (black, djhtml, flake8) lint: djhtml -i $(find templates -name '*.html' -print) black . flake8 . And now I can run all of these linting style libraries with a single command `just lint` | 2021-08-22 | I had read about a project called djhtml and wanted to use it on one of my projects. The documentation is really good for adding it to precommit-ci, but I wasn't sure what I needed to do to just run it on the command line. It took a bit of … | djhtml and justfile | https://www.ryancheley.com/2021/08/22/djhtml-and-justfile/ |
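The `find templates -name '*.html'` incantation that the just recipe wraps can also be expressed in Python with `pathlib` — a sketch of what that command collects for djhtml, not part of the original post:

```python
# pathlib equivalent of: find templates -name '*.html' -print
from pathlib import Path


def html_templates(root="templates"):
    """Return every .html file under root, recursively, as sorted strings."""
    return sorted(str(p) for p in Path(root).rglob("*.html"))


# djhtml would then be invoked once per path:
#   djhtml -i <each of these paths>
```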
dropbox-files-word-cloud | ryan | technology | In one of my [previous posts](https://www.ryancheley.com/blog/2016/11/22/twitter-word-cloud) I walked through how I generated a wordcloud based on my most recent 20 tweets. I thought it would be _neat_ to do this for my [Dropbox](https://www.dropbox.com) file names as well, just to see if I could. When I first tried to do it (as previously stated, the Twitter Word Cloud post was the first python script I wrote) I ran into some difficulties. I didn't really understand what I was doing (although I still don't **really** understand, I at least have a vague idea of what the heck I'm doing now). The script isn't much different than the [Twitter](https://www.twitter.com) word cloud. The only real differences are: 1. the way in which the `words` variable is being populated 2. the mask that I'm using to display the cloud In order to go get the information from the file system I use the `glob` library: import glob The next lines have not changed import matplotlib.pyplot as plt from wordcloud import WordCloud, STOPWORDS from scipy.misc import imread Instead of writing to a 'tweets' file I'm looping through the files, splitting them at the `/` character and getting the last item (i.e. the file name) and appending it to the list `f`: f = [] for filename in glob.glob('/Users/Ryan/Dropbox/Ryan/**/*', recursive=True): f.append(filename.split('/')[-1]) The rest of the script generates the image and saves it to my Dropbox Account.
Again, instead of using a [Twitter](https://www.twitter.com) logo, I'm using a **Cloud** image I found [here](http://www.shapecollage.com/shapes/mask-cloud.png) words = ' ' for line in f: words= words + line stopwords = {'https'} logomask = imread('mask-cloud.png') wordcloud = WordCloud( font_path='/Users/Ryan/Library/Fonts/Inconsolata.otf', stopwords=STOPWORDS.union(stopwords), background_color='white', mask = logomask, … | 2016-11-25 | In one of my [previous posts](https://www.ryancheley.com/blog/2016/11/22/twitter-word-cloud) I walked through how I generated a wordcloud based on my most recent 20 tweets. I thought it would be _neat_ to do this for my [Dropbox](https://www.dropbox.com) file names as well, just to see if I could. When I first tried to do it … | Dropbox Files Word Cloud | https://www.ryancheley.com/2016/11/25/dropbox-files-word-cloud/ |
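The data-prep half of the script — gathering file names and turning them into word frequencies — can be sketched without the `wordcloud` or image libraries. This version works from a list of paths rather than a live Dropbox folder so it's self-contained; the sample paths are made up:

```python
# Sketch of the data-prep step: keep the final path component (the file
# name), as the post's split('/')[-1] does, then count word frequencies —
# the same input a WordCloud object would consume.
from collections import Counter


def filename_words(paths):
    """Split each path on '/' and keep the final component."""
    return [p.split("/")[-1] for p in paths]


paths = [
    "/Users/Ryan/Dropbox/Ryan/notes/todo list.txt",
    "/Users/Ryan/Dropbox/Ryan/recipes/todo soup.md",
]
words = Counter(" ".join(filename_words(paths)).split())
print(words.most_common(1))  # [('todo', 2)]
```

Swapping the hard-coded list for `glob.glob('...', recursive=True)` gets you back to the post's version.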
enhancements-using-github-actions-to-deploy | ryan | technology | Integrating a version control system into your development cycle is just kind of one of those things that you do, right? I use GitHub for my version control, and GitHub Actions to help with my deployment process. There are 3 `yaml` files I have to get my local code deployed to my production server: * django.yaml * dev.yaml * prod.yaml Each one serves its own purpose. ## django.yaml The `django.yaml` file is used to run my tests and other actions on a GitHub runner. It does this in 9 distinct steps and one Postgres service. The steps are: 1. Set up Python 3.8 - setting up Python 3.8 on the docker image provided by GitHub 2. psycopg2 prerequisites - setting up `psycopg2` to use the Postgres service created 3. graphviz prerequisites - setting up the requirements for graphviz which creates an image of the relationships between the various models 4. Install dependencies - installs all of my Python package requirements via pip 5. Run migrations - runs the migrations for the Django App 6. Load Fixtures - loads data into the database 7. Lint - runs `black` on my code 8. Flake8 - runs `flake8` on my code 9. Run Tests - runs all of the tests to ensure they pass name: Django CI on: push: branches-ignore: - main - dev jobs: build: runs-on: ubuntu-18.04 services: postgres: image: postgres:12.2 env: POSTGRES_USER: postgres POSTGRES_PASSWORD: postgres POSTGRES_DB: github_actions ports: - 5432:5432 # needed because the postgres container does not provide a healthcheck options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5 steps: - uses: actions/checkout@v1 - name: Set up Python 3.8 uses: actions/setup-python@v1 with: python-version: 3.8 - uses: actions/cache@v1 … | 2021-03-14 | Integrating a version control system into your development cycle is just kind of one of those things that you do, right?
I use GitHub for my version control, and GitHub Actions to help with my deployment process. There are 3 `yaml` files I have to get my local … | Enhancements: Using GitHub Actions to Deploy | https://www.ryancheley.com/2021/03/14/enhancements-using-github-actions-to-deploy/ |
figuring-out-how-drafts-really-works | ryan | technology | On my way back from Arizona a few weeks ago I decided to play around with Drafts a bit. Now I use Drafts every day. When it went to a subscription model more than a year ago it was a no brainer for me. This is a seriously powerful app when you need it. But since my initial workflows and shortcuts I've not really done too [much](/creating-hastags-for-social-media-with-a-drafts-action.html) with it. But after listening to some stuff from [Tim Nahumck](https://nahumck.me) I decided I needed to invest a little time ... and honestly there's no better time than cruising at 25k feet on your way back from Phoenix. Ok, first of all I never really understood workspaces. I had some set up but I didn't get it. That was the first place I started. Each workspace can have its own action and keyboard shortcut thing which I didn't realize. This has so much potential. I can create workspaces for all sorts of things and have the keyboard shortcut things I need when I need them! This alone is mind blowing and I'm disappointed I didn't look into this feature sooner. I have 4 workspaces set up: * OF Templates * O3 * Scrum * post ideas Initially since I didn't really understand the power of the workspace I had them mostly as filtering tools to be used when trying to find a draft. But now with the custom action and keyboards for each workspace I have them set up to filter down to specific tags AND use their own keyboards. The OF Template workspace is used to create OmniFocus projects based on Taskpaper markup. There are a ton of different actions that I took from [Rose Orchard](https://www.relay.fm/people/rose-orchard) (of [Automators](https://automators.fm) fame) that help to either add items with the correct syntax to a Task Paper markdown file OR turn the whole thing into an OmniFocus project. 
Simply a life saver for when I really know all of the steps that are going to be involved in a project and I want to write them all down! The O3 workspace is used for processing the notes from the one-on-one I have with my team.… | 2019-05-05 | On my way back from Arizona a few weeks ago I decided to play around with Drafts a bit. Now I use Drafts every day. When it went to a subscription model more than a year ago it was a no brainer for me. This is a seriously powerful app … | Figuring out how Drafts REALLY works | https://www.ryancheley.com/2019/05/05/figuring-out-how-drafts-really-works/ |
fixing-a-pycharm-issue-when-updating-python-made-via-homebrew | ryan | technology | I’ve written before about how easy it is to update your version of Python using homebrew. And it totally is easy. The thing that isn’t super clear is that when you do update Python via Homebrew, it seems to break your virtual environments in PyCharm. 🤦‍♂️ I did a bit of searching to find this nice [post on the JetBrains forum](https://intellij-support.jetbrains.com/hc/en-us/community/posts/360000306410-Cannot-use-system-interpreter-in-PyCharm-Pro-2018-1) which indicated > > unfortunately it's a known issue: > <https://youtrack.jetbrains.com/issue/PY-27251> . Please close Pycharm and > remove jdk.table.xml file from ~/Library/Preferences/.PyCharm2018.1/options > directory, then start Pycharm again. OK. I removed the file, but then you have to rebuild the virtual environments because that file is what stores PyCharm's knowledge of those virtual environments. In order to get you back to where you need to be, do the following (after removing the `jdk.table.xml` file): 1. `pip freeze > requirements.txt` 2. Remove old virtual environment `rm -r venv` 3. Create a new Virtual Environment with PyCharm 1. Go to Preferences 2. Project > Project Interpreter 3. Show All 4. Click ‘+’ button 4. `pip install -r requirements.txt` 5. Restart PyCharm 6. You're back This is a giant PITA but thankfully it didn’t take too much to find the issue, nor to fix it. With that being said, I totally shouldn’t have to do this. But I’m writing it down so that once Python 3.8 is available I’ll be able to remember what I did to fix going from Python 3.7.1 to 3.7.5. | 2019-11-14 | I’ve written before about how easy it is to update your version of Python using homebrew. And it totally is easy. The thing that isn’t super clear is that when you do update Python via Homebrew, it seems to break your virtual environments in PyCharm.
🤦♂️ I did a … | Fixing a PyCharm issue when updating Python made via HomeBrew | https://www.ryancheley.com/2019/11/14/fixing-a-pycharm-issue-when-updating-python-made-via-homebrew/ |
fixing-the-python-3-problem-on-my-raspberry-pi | ryan | technology | In my last post I indicated that I may need to > reinstalling everything on the Pi and starting from scratch While speaking about my issues with `pip3` and `python3`. Turns out that the fix was easier than I thought. I checked to see where `pip3` and `python3` were being executed from by running the `which` command. The `which pip3` returned `/usr/local/bin/pip3` while `which python3` returned `/usr/local/bin/python3`. This is exactly what was causing my problem. To verify what version of python was running, I checked `python3 --version` and it returned `3.6.0`. To fix it I just ran these commands to _unlink_ the new, broken versions: `sudo unlink /usr/local/bin/pip3` And `sudo unlink /usr/local/bin/python3` I found this answer on [StackOverflow](https://stackoverflow.com/questions/7679674/changing-default-python-to-another-version "Of Course the answer was on Stack Overflow!") and tweaked it slightly for my needs. Now, when I run `python3 --version` I get `3.4.2` instead of `3.6.0` Unfortunately I didn’t think to run the `--version` flag on pip before and after the change, and I’m hesitant to do it now as it’s back to working. | 2018-02-13 | In my last post I indicated that I may need to > reinstalling everything on the Pi and starting from scratch While speaking about my issues with `pip3` and `python3`. Turns out that the fix was easier than I thought. I checked to see where `pip3` and `python3` were being … | Fixing the Python 3 Problem on my Raspberry Pi | https://www.ryancheley.com/2018/02/13/fixing-the-python-3-problem-on-my-raspberry-pi/ |
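The `which` and `--version` checks above can also be done from inside Python, which is handy for spotting a mismatched interpreter before it bites. A small diagnostic sketch (not from the original post):

```python
# Report where python3/pip3 resolve on the PATH and which version is
# actually running — the same checks as `which python3` and
# `python3 --version` from the post.
import shutil
import sys


def interpreter_report():
    """Return PATH resolution for python3/pip3 and the running version."""
    return {
        "python3": shutil.which("python3"),  # e.g. /usr/local/bin/python3
        "pip3": shutil.which("pip3"),        # None if not on the PATH
        "running": sys.version.split()[0],   # version of *this* interpreter
    }


report = interpreter_report()
print(report)
```

If `report["python3"]` points somewhere unexpected (like a stale `/usr/local/bin` symlink), that's the same symptom the `unlink` commands above fix.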
fizz-buzz | ryan | technology | I was listening to the most recent episode of [ATP](http://atp.fm/episodes/302) and John Siracusa mentioned a programmer test called [fizz buzz](http://wiki.c2.com/?FizzBuzzTest) that I hadn’t heard of before. I decided that I’d give it a shot when I got home using Python and Bash, just to see if I could (I was sure I could, but you know, wanted to make sure). Sure enough, with a bit of googling to remember some syntax of Python, and learn some syntax for bash, I had two stupid little programs for fizz buzz. ## Python def main(): my_number = input("Enter a number: ") if not my_number.isdigit(): return else: my_number = int(my_number) if my_number%3 == 0 and my_number%15!=0: print("fizz") elif my_number%5 == 0 and my_number%15!=0: print("buzz") elif my_number%15 == 0: print("fizz buzz") else: print(my_number) if __name__ == '__main__': main() ## Bash #! /bin/bash echo "Enter a Number: " read my_number re='^[+-]?[0-9]+$' if ! [[ $my_number =~ $re ]] ; then echo "error: Not a number" >&2; exit 1 fi if ! ((my_number % 3)) && ((my_number % 15)); then echo "fizz" elif ! ((my_number % 5)) && ((my_number % 15)); then echo "buzz" elif ! ((my_number % 15)) ; then echo "fizz buzz" else echo $my_number fi And because if it isn’t in GitHub it didn’t happen, I committed it to my [fizz-buzz repo](https://github.com/ryancheley/fizz-buzz). I figure it might be kind of neat to write it in as many languages as I can, you know … for when I’m bored. | 2018-11-28 | I was listening to the most recent episode of [ATP](http://atp.fm/episodes/302) and John Siracusa mentioned a programmer test called [fizz buzz](http://wiki.c2.com/?FizzBuzzTest) that I hadn’t heard of before. I decided that I’d give it a shot when I got home using Python and Bash, just to see if I could … | Fizz Buzz | https://www.ryancheley.com/2018/11/28/fizz-buzz/ |
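The `% 15 != 0` guards in the Python version can be dropped by checking divisibility by 15 first. A compact variant in that spirit — a sketch, not the code in the repo:

```python
# FizzBuzz with the multiple-of-15 case tested first, so the 3 and 5
# branches need no extra guards. Output strings match the post's
# ("fizz buzz" with a space).
def fizz_buzz(n: int) -> str:
    """Return 'fizz', 'buzz', 'fizz buzz', or the number as a string."""
    if n % 15 == 0:
        return "fizz buzz"
    if n % 3 == 0:
        return "fizz"
    if n % 5 == 0:
        return "buzz"
    return str(n)


print([fizz_buzz(n) for n in range(1, 16)])
```

Returning a string instead of printing also makes the rules trivially testable, which helps when porting the same logic to other languages.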
fun-with-mcps | ryan | technology | Special Thanks to [Jeff Triplett](https://mastodon.social/@webology) who provided an example that really got me started on a better understanding of how this all works. In trying to wrap my head around MCPs over the long Memorial weekend I had a breakthrough. I'm not really sure why this was so hard for me to [grok](https://en.wikipedia.org/wiki/Grok), but now something seems to have clicked. I am working with [Pydantic AI](https://ai.pydantic.dev/) and so I'll be using that as an example, but since MCPs are a standard protocol, these concepts apply broadly across different implementations. ## What is Model Context Protocol (MCP)? Per the [Anthropic announcement](https://www.anthropic.com/news/model-context-protocol) (from November 2024!!!!) > The Model Context Protocol is an open standard that enables developers to > build secure, two-way connections between their data sources and AI-powered > tools. The architecture is straightforward: developers can either expose > their data through MCP servers or build AI applications (MCP clients) that > connect to these servers. What this means is that there is a standard way to extend models like Claude or OpenAI to include other information. That information can be files on the file system, data in a database, etc. ## (Potential) Real World Example I work for a Healthcare organization in Southern California. One of the biggest challenges with onboarding new hires (and honestly can be a challenge for people that have been with the organization for a long time) is who to reach out to for support on which specific application. Typically a user will send an email to one of the support teams, and the email request can get bounced around for a while until it finally lands on the 'right' support desk. There's the potential to have the applications themselves include who to contact, but some applications are vendor supplied and there isn't always a way to do that.
Even if there were, in my experience those are often not noticed by users OR the users will think that the… | 2025-06-02 | Special Thanks to [Jeff Triplett](https://mastodon.social/@webology) who provided an example that really got me started on better understanding of how this all works. In trying to wrap my head around MCPs over the long Memorial weekend I had a breakthrough. I'm not really sure why this was so hard for me … | Fun with MCPs | https://www.ryancheley.com/2025/06/02/fun-with-mcps/ |
gcp-cloud-architect-exam-experience | ryan | technology | [Last October it was announced](https://www.fiercehealthcare.com/health-tech/google-health-notches-another-provider-partner-care-studio) that Desert Oasis Healthcare (the company I work for) signed on to pilot [Google's Care Studio](https://health.google/caregivers/care-studio/). DOHC is the first ambulatory clinic to sign on. I had been in some of the discovery meetings before the announcement and was really excited about the opportunity. So far our use of any Cloud platforms at work has been extremely limited (that is to say, we don't use ANY of the big three cloud solutions for our tech) so this seemed to provide a really good opportunity. As we worked through the project scoping there were conversations about the handoff to DOHC and it occurred to me that I didn't have any knowledge of what GCP offered, what any of it did, or how any of it could work. I've had on my 'To Do' list to learn one of the Big Three Cloud services (AWS, Azure, or GCP) but because we didn't use ANY of them at work I was (a) worried about picking the 'wrong' one and (b) worried that even if I picked one I'd NEVER be able to use it! The partnership with Google changed that. Suddenly which cloud service to learn was apparent AND I'd be able to use whatever I learned for work! Great, now I know which cloud service to start to learn about ... the next question is, "What do I try to learn?". In speaking with some of the folks at Google they recommended one of three Certification options: 1. [Digital Cloud Leader](https://cloud.google.com/certification/cloud-digital-leader) 2. [Cloud Engineer](https://cloud.google.com/certification/cloud-engineer) 3. [Cloud Architect](https://cloud.google.com/certification/cloud-architect) After reviewing each of them and having a good idea of what I **need** to know for work, I opted for the Cloud Architect path.
Knowing which certification I was going to work towards, I started to see what learning options were available for me. It just so happens that [Coursera partnered with the California… | 2023-04-01 | [Last October it was announced](https://www.fiercehealthcare.com/health- tech/google-health-notches-another-provider-partner-care-studio) that Desert Oasis Healthcare (the company I work for) signed on to pilot [Google's Care Studio](https://health.google/caregivers/care-studio/). DOHC is the first ambulatory clinic to sign on. I had been in some of the discovery meetings before the announcement and was really excited about the opportunity. So … | GCP Cloud Architect Exam Experience | https://www.ryancheley.com/2023/04/01/gcp-cloud-architect-exam-experience/ |
getting-your-domain-to-point-to-digital-ocean-your-server | ryan | technology | I use Hover for my domain purchases and management. Why? Because they have a clean, easy to use, not-slimy interface, and because I listened to enough Tech Podcasts that I’ve drunk the Kool-Aid. When I was trying to get my Hover Domain to point to my Digital Ocean server it seemed much harder to me than it needed to be. Specifically, I couldn’t find any guide on doing it! Many of the tutorials I did find were basically like, it’s all the same. We’ll show you with GoDaddy and then you can figure it out. Yes, I can figure it out, but it wasn’t as easy as it could have been. That’s why I’m writing this up. ## Digital Ocean From the Droplet screen click ‘Add a Domain’  Add 2 ‘A’ records (one for www and one without the www)  Make note of the name servers  ## Hover In your account at Hover.com change your Name Servers to point to the Digital Ocean ones from above.  ## Wait DNS … does anyone _really_ know how it works? I just know that sometimes when I make a change it’s out there almost immediately for me, and sometimes it takes hours or days. At this point, you’re just going to potentially need to wait. Why? Because DNS, that’s why. Ugh! ## Setting up directory structure While we’re waiting for the DNS to propagate, now would be a good time to set up some file structures for when we push our code to the server. For my code deploy I’ll be using a user called `burningfiddle`. We have to do two things here: create the user, and add them to the `www-data` user group on our Linux server. We can run these commands to take care of that: adduser --disabled-password --gecos "" yoursite This command adds the user with no password and disables login until a password has been set. Sinc… | 2021-02-07 | I use Hover for my domain purchases and management. Why? 
Because they have a clean, easy to use, not-slimy interface, and because I listened to enough Tech Podcasts that I’ve drunk the Kool-Aid. When I was trying to get my Hover Domain to point to my Digital Ocean server … | Getting your Domain to point to Digital Ocean Your Server | https://www.ryancheley.com/2021/02/07/getting-your-domain-to-point-to-digital-ocean-your-server/ |
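The user-setup steps described in the row above (create the deploy user, then add it to the `www-data` group) can be sketched as a pair of commands — run as root, with `yoursite` as the placeholder username from the post:

```shell
# Create the user with no password; login stays disabled until a password is set
adduser --disabled-password --gecos "" yoursite

# Add the user to the www-data group so it can manage the web files
usermod -aG www-data yoursite

# Verify the group membership
id yoursite
```

The `-aG` form of `usermod` appends the group without dropping any of the user's existing groups.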
home-end-pgup-pgdn-bbedit-preferences | ryan | technology | As I've been writing up my posts for the last couple of days I've been using the amazing [macOS](https://en.wikipedia.org/wiki/Macintosh_operating_systems) [Text Editor](https://en.wikipedia.org/wiki/Text_editor) [BBEdit](http://www.barebones.com/products/bbedit/index.html). One of the things that has been tripping me up though is my 'Windows' tendencies on the keyboard. Specifically, my muscle memory of the use and behavior of the `Home`, `End`, `PgUp` and `PgDn` keys. The default behavior for these keys in BBEdit is not what I needed (nor wanted). I lived with it for a couple of days figuring I'd get used to it and that would be that. While driving home from work today I was listening to [ATP Episode 196](https://atp.fm/episodes/196) and their Post-Show discussion of the recent departure of [Sal Soghoian](https://en.wikipedia.org/wiki/Sal_Soghoian) who was the Project Manager for macOS automation. I'm not sure why, but suddenly it clicked with me that I could probably change the behavior of the keys through the Preferences for the Keyboard (either system wide, or just in the Application). When I got home I fired up [BBEdit](http://www.barebones.com/products/bbedit/index.html) and jumped into the preferences and saw this:  I made a couple of changes, and now the keys that I use to navigate through the text editor behave how I want them to:  Nothing too fancy, or anything, but goodness, does it feel right to have the keys work the way I need them to. | 2016-11-22 | As I've been writing up my posts for the last couple of days I've been using the amazing [macOS](https://en.wikipedia.org/wiki/Macintosh_operating_systems) [Text Editor](https://en.wikipedia.org/wiki/Text_editor) [BBEdit](http://www.barebones.com/products/bbedit/index.html). One of the things that has been tripping me up though is my 'Windows' tendencies on the keyboard. 
Specifically, my muscle memory of the use and behavior of … | Home, End, PgUp, PgDn ... BBEdit Preferences | https://www.ryancheley.com/2016/11/22/home-end-pgup-pgdn-bbedit-preferences/ |
how-does-my-django-site-connect-to-the-internet-anyway | ryan | technology | I created a Django site to troll my cousin Barry who is a big [San Diego Padres](https://www.mlb.com/padres "San Diego Padres") fan. Their Shortstop is a guy called [Fernando Tatis Jr.](https://www.baseball-reference.com/players/t/tatisfe02.shtml "Fernando “Error Maker” Tatis Jr.") and he’s really good. Like **really** good. He’s also young, and arrogant, and is everything an old dude like me doesn’t like about the ‘new generation’ of ball players that are changing the way the game is played. In all honesty though, it’s fun to watch him play (anyone but the Dodgers). The thing about him though, is that while he’s really good at the plate, he’s less good at playing defense. He currently leads the league in errors. Not just for all shortstops, but for ALL players! Anyway, back to the point. I made this Django site called [Does Tatis Jr Have an Error Today?](https://www.doestatisjrhaveanerrortoday.com "Not Yet") It is a simple site that only does one thing ... tells you if Tatis Jr has made an error today. If he hasn’t, then it says `No`, and if he has, then it says `Yes`. It’s a dumb site that doesn’t do anything else. At all. But, what it did do was lead me down a path to answer the question, “How does my site connect to the internet anyway?” Seems like a simple enough question to answer, and it is, but it wasn’t really what I thought when I started. ## How it works I use a MacBook Pro to work on the code. I then deploy it to a Digital Ocean server using GitHub Actions. But as they say, a picture is worth a thousand words, so here's a chart of the workflow:  This shows the development cycle, but that doesn’t answer the question: how does the site connect to the internet? How is it that when I go to the site, I see anything? I thought I understood it, and when I tried to actually draw it out, turns out I didn't! 
After a bit of Googling, I found [this](https://serverfault.com/a/331263 "How does Gunicorn interact … | 2021-05-31 | I created a Django site to troll my cousin Barry who is a big [San Diego Padres](https://www.mlb.com/padres "San Diego Padres") fan. Their Shortstop is a guy called [Fernando Tatis Jr.](https://www.baseball-reference.com/players/t/tatisfe02.shtml "Fernando “Error Maker” Tatis Jr.") and he’s really good. Like **really** good. He’s also young, and arrogant, and is everything an old dude like me doesn … | How does my Django site connect to the internet anyway? | https://www.ryancheley.com/2021/05/31/how-does-my-django-site-connect-to-the-internet-anyway/ |
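The workflow chart in the row above covers the deploy side; the serving side (which the truncated link is getting at) is typically a web server accepting the browser's request and proxying it to Gunicorn, which runs the Django code. A minimal sketch, assuming nginx in front of a Gunicorn Unix socket — the server name and socket path are illustrative, not the site's actual config:

```nginx
server {
    listen 80;
    server_name doestatisjrhaveanerrortoday.com;

    location / {
        # Hand the request to Gunicorn, which invokes the Django app via WSGI
        proxy_pass http://unix:/run/gunicorn.sock;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

In this arrangement the browser only ever talks to nginx; Gunicorn listens on the socket, and the Django process never touches a network port itself.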
hummingbird-video-capture | ryan | technology | I [previously wrote](/using-mp4box-to-concatenate-many-h264-files-into-one-mp4-file-revisited.html) about how I placed my Raspberry Pi above my hummingbird feeder and added a camera to it to capture video. Well, the day has finally come where I’ve been able to put my video of it up on [YouTube](https://youtu.be/_oNlhrZJ-0Y)! It’s totally silly, but it was satisfying getting it out there for everyone to watch and see. ## Hummingbird Video Capture: Addendum The code used to generate the `mp4` file hasn’t changed (really). I did do a couple of things to make it a little easier though. I have 2 scripts: one generates the file, and the other copies it from the Pi to my MacBook Pro and then cleans up. Script 1 is called `create_script.sh` and looks like this: (echo '#!/bin/sh'; echo -n "MP4Box"; array=($(ls *.h264)); for index in ${!array[@]}; do if [ "$index" -eq 0 ]; then echo -n " -add ${array[index]}"; else echo -n " -cat ${array[index]}"; fi; done; echo -n " hummingbird.mp4") > create_mp4.sh && chmod +x create_mp4.sh This creates a script called `create_mp4.sh` and makes it executable. This script is called by another script called `run_script.sh` which looks like this: ./create_script.sh ./create_mp4.sh scp hummingbird.mp4 ryan@192.168.1.209:/Users/ryan/Desktop/ # Next we remove the video files locally rm *.h264 rm *.mp4 It runs `create_script.sh`, which creates `create_mp4.sh`, and then runs it. Then I use the `scp` command to copy the `mp4` file that was just created over to my MacBook Pro. As a last bit of housekeeping I clean up the video files. I’ve added this `run_script.sh` to a cron job that is scheduled to run every night at midnight. We’ll see how well it runs tomorrow night! | 2018-04-05 | I [previously wrote](/using-mp4box-to-concatenate-many-h264-files-into-one-mp4-file-revisited.html) about how I placed my Raspberry Pi above my hummingbird feeder and added a camera to it to capture video. 
Well, the day has finally come where I’ve been able to put my video of it up on [YouTube](https://youtu.be/_oNlhrZJ-0Y)! It’s totally silly, but it was … | Hummingbird Video Capture | https://www.ryancheley.com/2018/04/05/hummingbird-video-capture/ |
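Since the post above schedules `run_script.sh` for every night at midnight, the crontab entry would look something like this (the path is illustrative):

```
# m h dom mon dow  command — midnight every day
0 0 * * * /home/pi/run_script.sh
```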
i-made-a-slackbot | ryan | technology | ## Building my first Slack Bot I had added a project to my OmniFocus database in November of 2021 which was, "Build a Slackbot" after watching a [Video](https://www.youtube.com/watch?v=2X8SrKL7E9A) by [Mason Egger](https://twitter.com/masonegger). I had hoped that I would be able to spend some time on it over the holidays, but I was never able to really find the time. A few weeks ago, [Bob Belderbos](https://twitter.com/bbelderbos) tweeted: > If you were to build a Slack bot, what would it do? > > — Bob Belderbos (@bbelderbos) [February 2, 2022](https://twitter.com/bbelderbos/status/1488806429251313666?ref_src=twsrc%5Etfw) And I responded > I work in US Healthcare where there are a lot of Acronyms (many of which are > used in tech but have different meaning), so my slack bot would allow a user > to enter an acronym and return what it means, i.e., CMS = Centers for > Medicare and Medicaid Services. > > — The B Is Silent (@ryancheley) [February 2, 2022](https://twitter.com/ryancheley/status/1488879253911261184?ref_src=twsrc%5Etfw) I didn't _really_ have any more time now than I did over the holiday, but Bob asking and me answering pushed me to _actually_ write the darned thing. I think one of the problems I encountered was what backend / tech stack to use. I'm familiar with Django, but going from 0 to something in production has a few steps and although I know how to do them ... I just felt ~overwhelmed~ by the prospect. I felt equally ~overwhelmed~ by the prospect of trying FastAPI to create the API or Flask, because I am not as familiar with their deployment story. Another thing that was different now than before was that I had worked on a [Django Cookie Cutter](https://github.com/ryancheley/django-cookiecutter) to use and that was 'good enough' to try it out. So I did. 
I ran into a few [problems](https://github.com/ryancheley/django-cookiecutter/compare/de07ba6..cd7c272) while working with my Django Cookie Cutter but I fixed them and then dove head first into writing the Slack Bot. ## The model Th… | 2022-02-19 | ## Building my first Slack Bot I had added a project to my OmniFocus database in November of 2021 which was, "Build a Slackbot" after watching a [Video](https://www.youtube.com/watch?v=2X8SrKL7E9A) by [Mason Egger](https://twitter.com/masonegger). I had hoped that I would be able to spend some time on it over the holidays, but I was … | I made a Slackbot! | https://www.ryancheley.com/2022/02/19/i-made-a-slackbot/ |
inserting-a-url-in-markdown-in-vs-code | ryan | technology | Since I [switched my blog to pelican](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-wordpress/) last summer I've been using [VS Code](https://code.visualstudio.com) as my writing app. And it's **really** good for writing, not just code but prose as well. The one problem I've had is there's no keyboard shortcut for links when writing in markdown ... at least not a default / native keyboard shortcut. In other (macOS) writing apps you just select the text and press ⌘+k and boop! There's a markdown link set up for you. But not so much in VS Code. I finally got to the point where that was one thing that may have been keeping me from writing because of how much 'friction' it caused! So, I decided to figure out how to fix that. I did have to do a bit of googling and eventually found [this](https://stackoverflow.com/a/70601782) StackOverflow answer. Essentially the answer is 1. Open the Command Palette: ⌘+Shift+P 2. Select `Preferences: Open Keyboard Shortcuts (JSON)` 3. Update the `keybindings.json` file to include a new key The new key looks like this: { "key": "cmd+k", "command": "editor.action.insertSnippet", "args": { "snippet": "[${TM_SELECTED_TEXT}]($0)" }, "when": "editorHasSelection && editorLangId == markdown" } Honestly, it's _little_ things like this that can make life so much easier and more fun. Now I just need to remember to do this on my work computer 😀 | 2022-04-08 | Since I [switched my blog to pelican](https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-wordpress/) last summer I've been using [VS Code](https://code.visualstudio.com) as my writing app. And it's **really** good for writing, not just code but prose as well. The one problem I've had is there's no keyboard shortcut for links when writing in markdown ... at least not … | Inserting a URL in Markdown in VS Code | https://www.ryancheley.com/2022/04/08/inserting-a-url-in-markdown-in-vs-code/ |
installing-fonts-in-ulysses | ryan | technology | One of the people I follow online, [Federico Viticci](http://ticci.org), is an iOS power user, although I would argue that phrase doesn’t really do him justice. He can make the iPad do things that many people can’t get Macs to do. Recently he [posted](https://www.macstories.net/linked/in-search-of-the-perfect-writing-font/) an article on a new font he is using in Ulysses and I wanted to give it a try. The article says: > > Installing custom fonts in Ulysses for iOS is easy: [go to the GitHub > page](https://github.com/iaolo/iA-Fonts/tree/master/iA%20Writer%20Duospace "iA Writer Duospace"), download each one, and open them in Ulysses (with the > share sheet) to install them. Simple enough, but it wasn’t clicking for me. I kept thinking I had done _something_ wrong. So I thought I’d write up the steps I used so I wouldn’t forget the next time I need to add a new font. ## Downloading the Font 1. Download the font to somewhere you can get it. I chose to save it to iCloud and use the `Files` app 2. Hit Select in the `Files` app 3. Click `Share` 4. Select `Open in Ulysses` 5. The custom font is now installed and being used. ## Checking the Font: 1. Click the ‘A’ in the writing screen (this is the font selector) located in the upper right hand corner of Ulysses  2. Notice that the Current font indicates it’s a custom font (in this case iA Writer Duospace):  Not that hard, but there’s no feedback telling you that you have been successful so I wasn’t sure if I had done it or not. | 2017-12-12 | One of the people I follow online, [Federico Viticci](http://ticci.org), is an iOS power user, although I would argue that phrase doesn’t really do him justice. He can make the iPad do things that many people can’t get Macs to do. 
Recently he [posted](https://www.macstories.net/linked/in-search-of-the- perfect-writing-font/) an article on a new … | Installing fonts in Ulysses | https://www.ryancheley.com/2017/12/12/installing-fonts-in-ulysses/ |
installing-the-osmnx-package-for-python | ryan | technology | I read about a cool gis package for Python and decided I wanted to play around with it. This post isn't about any of the things I've learned about the package, it's so I can remember how I installed it so I can do it again if I need to. The package is described by its author in his [post](http://geoffboeing.com/2016/11/osmnx-python-street-networks/). To install `osmnx` I needed to do the following: 1. Install [Homebrew](https://brew.sh) if it's not already installed by running this command (as an administrator) in the `terminal`: > > `/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"` 2. Use [Homebrew to install the `spatialindex` dependency](https://github.com/kjordahl/SciPy-Tutorial-2015/issues/1). From the `terminal` (again as an administrator): > > `brew install spatialindex` 3. From the `terminal`, run pip to install `rtree`: > > `pip install rtree` 4. From the `terminal`, run pip to install `osmnx`: > > `pip install osmnx` I did this on my 2014 iMac but didn't document the process. This led to a problem when I tried to run some code on my 2012 MacBook Pro. Step 3 may not be required, but I'm **not** sure and I don't want to not have it written down and then wonder why I can't get `osmnx` to install in 3 years when I try again! Remember, you're not going to remember what you did, so you need to write it down! | 2016-11-24 | I read about a cool gis package for Python and decided I wanted to play around with it. This post isn't about any of the things I've learned about the package, it's so I can remember how I installed it so I can do it again if I need to … | Installing the osmnx package for Python | https://www.ryancheley.com/2016/11/24/installing-the-osmnx-package-for-python/ |
ipad-versus-macbook-pro | ryan | technology | Many people ask the question ... iPad Pro or MacBook Pro. I decided to really think about this question and see, what is it that I do with each device. Initially I thought of each device as being its own ‘thing’. I did these things on my iPad Pro and those things on my MacBook Pro ... but when I really sat down and thought about it, it turns out that there are things I do exclusively on my iPad Pro, and other things that I do exclusively on my MacBook Pro ... but there are also many things that I do on both. ## iPad Pro There are apps which only run on iOS. Drafts is a perfect example. It’s my note taking app of choice. Using my iPhone in conjunction with my iPad makes Drafts one of the most powerful apps I use in the iOS ecosystem. During meetings I can quickly jot down things that I need to know using my iPhone and no one notices or cares. Later, I can use my iPad Pro to process these notes and make sure that everything gets taken care of. I can also use Drafts as a powerful automation tool to get ideas into OmniFocus (my To Do App of Choice) easily and without any fuss. I also use my iPad Pro to process the expenses my family incurs. We use Siri Shortcuts to take a picture of a receipt which is then saved in a folder in Dropbox. I monitor these images and match them up against expenses (or income) in Mint and categorize the expenses. This workflow helps to keep me (and my family) in the know about how (and more importantly where) we’re spending our money. Mint is available as a web page, and I’ve tried to use macOS and this workflow, but it simply didn’t work for me. Using OmniFocus on the iPad is a dream. I am easily able to process my inbox, perform my weekly review and quickly add new items to the inbox. The ability to drag and drop with either Apple Pencil or my finger makes it so easy to move tasks around. The other (obvious) use case for my iPad Pro over my MacBook Pro is media consumption. 
Everyone says you can’t get real work done on an iPad and they point to how easy it is to consume media … | 2018-12-01 | Many people ask the question ... iPad Pro or MacBook Pro. I decided to really think about this question and see, what is it that I do with each device. Initially I thought of each device as being its own ‘thing’. I did these things on my iPad Pro and those … | iPad versus MacBook Pro | https://www.ryancheley.com/2018/12/01/ipad-versus-macbook-pro/ |
issues-with-psycopg2-again | ryan | technology | In a [previous post](/mischief-managed/) I had written about an issue I’d had with upgrading, installing, or just generally maintaining the python package `psycopg2` ([link](https://www.psycopg.org)). I ran into that issue again today, and thought to myself, “Hey, I’ve had this problem before AND wrote something up about it. Let me go see what I did last time.” I searched my site for `psycopg2` and tried the solution, but I got the same [forking](https://thegoodplace.fandom.com/wiki/Censored_Curse_Words) error. OK … let’s turn to the experts on the internet. After a while I came across [this](https://stackoverflow.com/questions/26288042/error-installing-psycopg2-library-not-found-for-lssl) article on StackOverflow but this [specific answer](https://stackoverflow.com/a/56146592) helped get me up and running. A side effect of all of this is that I upgraded from Python 3.7.5 to Python 3.8.1. I also updated all of my brew packages, and basically did a lot of cleaning up that I had neglected. Not how I expected to spend my morning, but productive nonetheless. | 2020-05-03 | In a [previous post](/mischief-managed/) I had written about an issue I’d had with upgrading, installing, or just generally maintaining the python package `psycopg2` ([link](https://www.psycopg.org)). I ran into that issue again today, and thought to myself, “Hey, I’ve had this problem before AND wrote something up about it. Let … | Issues with psycopg2 … again | https://www.ryancheley.com/2020/05/03/issues-with-psycopg2-again/ |
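For future reference, the StackOverflow question linked in the row above is about a `library not found for -lssl` build failure, and (as I understand it) fixes of that kind generally amount to pointing the compiler and linker at Homebrew's OpenSSL before installing — a sketch with illustrative paths (adjust to your Homebrew prefix):

```shell
# Tell the psycopg2 build where Homebrew's OpenSSL lives (paths vary by machine)
export LDFLAGS="-L/usr/local/opt/openssl/lib"
export CPPFLAGS="-I/usr/local/opt/openssl/include"
pip install psycopg2
```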
itfdb | ryan | technology | My wife and I **love** baseball season. Specifically we love the [Dodgers](https://www.mlb.com/dodgers "Go Dodgers!!!") and we can’t wait for Spring Training to begin. In fact, today pitchers and catchers report! I’ve wanted to do something with the Raspberry Pi Sense Hat that I got (since I got it) but I’ve struggled to find anything useful. And then I remembered baseball season and I thought, ‘Hey, what if I wrote something to have the Sense Hat say “#ITFDB” starting 10 minutes before a Dodgers game started?’ And so I did! The script itself is relatively straight forward. It reads a csv file and checks to see if the current time in California is within 10 minutes of start time of the game. If it is, then it will send a `show_message` command to the Sense Hat. I also wrote a cron job to run the script every minute so that I get a beautiful scrolling bit of text every minute before the Dodgers start! The code can be found on my [GitHub](https://github.com/ryancheley/itfdb "Git Hub") page in the itfdb repository. There are 3 files: 1. `Program.py` which does the actual running of the script 2. `data_types.py` which defines a class used in `Program.py` 3. `schedule.csv` which is the schedule of the games for 2018 as a csv file. I ran into a couple of issues along the way. First, my development environment on my Mac Book Pro was Python 3.6.4 while the Production Environment on the Raspberry Pi was 3.4. This made it so that the code about time ran locally but not on the server 🤦♂️. 
It took some playing with the code, but I was finally able to go from this (which worked on 3.6 but not on 3.4): now = utc_now.astimezone(pytz.timezone("America/Los_Angeles")) game_date_time = game_date_time.astimezone(pytz.timezone("America/Los_Angeles")) To this which worked on both: local_tz = pytz.timezone('America/Los_Angeles') now = utc_now.astimezone(local_tz) game_date_time = local_tz.localize(game_date_time) For both, the `game_date_time` variable setting was don… | 2018-02-13 | My wife and I **love** baseball season. Specifically we love the [Dodgers](https://www.mlb.com/dodgers "Go Dodgers!!!") and we can’t wait for Spring Training to begin. In fact, today pitchers and catchers report! I’ve wanted to do something with the Raspberry Pi Sense Hat that I got (since I got it) but I … | ITFDB!!! | https://www.ryancheley.com/2018/02/13/itfdb/ |
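The pytz `localize` pattern above is the fix for the 3.4-vs-3.6 difference; for reference, the standard library's `zoneinfo` module (Python 3.9+) sidesteps `localize` entirely, because a zone can be attached to a naive datetime directly. A sketch, with a made-up game time:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+

local_tz = ZoneInfo("America/Los_Angeles")

# An aware "now" in UTC, converted to local time
utc_now = datetime.now(timezone.utc)
now = utc_now.astimezone(local_tz)

# A naive game time becomes aware by attaching the zone directly —
# no localize step needed; this date is illustrative
game_date_time = datetime(2018, 2, 13, 19, 10, tzinfo=local_tz)

# The ITFDB check: are we within 10 minutes of first pitch?
starts_soon = timedelta(0) <= game_date_time - now <= timedelta(minutes=10)
```

Because both values are timezone-aware, the subtraction works the same on any Python version that has `zoneinfo`.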
itfdb-demo | ryan | technology | Last Wednesday if you would have asked what I had planned for Easter I would have said something like, “Going to hide some eggs for my daughter even though she knows the Easter bunny isn’t real.” Then suddenly my wife and I were planning on entertaining for 11 family members. My how things change! Since I was going to have family over, some of whom are [Giants](https://www.mlb.com/giants) fans, I wanted to show them the [ITFDB program I have set up with my Pi](http://www.ryancheley.com/index.php/2018/02/13/itfdb/). The only problem is that they would be over at 10am and leave by 2pm while the game doesn’t start until 5:37pm (Thanks [ESPN](https://www.espn.com)). To help demonstrate the script I wrote a _demo_ script to display a message on the Pi and play the Vin Scully mp3. The Code was simple enough: from sense_hat import SenseHat import os def main(): sense = SenseHat() message = '#ITFDB!!! The Dodgers will be playing San Francisco at 5:37pm tonight!' sense.show_message(message, scroll_speed=0.05) os.system("omxplayer -b /home/pi/Documents/python_projects/itfdb/dodger_baseball.mp3") if __name__ == '__main__': main() But then the question becomes, how can I easily launch the script without [futzing](https://en.wiktionary.org/wiki/futz) with my laptop? I knew that I could run a shell script for the [Workflow app](https://workflow.is) on my iPhone with a single action, so I wrote a simple shell script python3 ~/Documents/python_projects/itfdb/demo.py Which was called `itfdb_demo.sh` And made it executable chmod u+x itfdb_demo.sh Finally, I created a WorkFlow which has only one action `Run Script over SSH` and added it to my home screen so that with a simple tap I could demo the results. The WorkFlow looks like this:  Nothing too fancy, but I was able to reliably and easily demonstrate what I had done. 
And it was… | 2018-04-01 | Last Wednesday if you would have asked what I had planned for Easter I would have said something like, “Going to hide some eggs for my daughter even though she knows the Easter bunny isn’t real.” Then suddenly my wife and I were planning on entertaining for 11 family … | ITFDB Demo | https://www.ryancheley.com/2018/04/01/itfdb-demo/ |
itfkh | ryan | technology | It’s time for Kings Hockey! A couple of years ago Emily and I decided to be Hockey fans. This hasn’t really meant anything except that we picked a team (the Kings) and ‘rooted’ for them (i.e. talked sh*t* to our hockey friends), looked up their position in the standings, and basically said, “Umm ... yeah, we’re hockey fans.” When the 2018 baseball season ended, and with the lack of interest in the NFL (or the NBA) Emily and I decided to actually focus on the NHL. Step 1 in becoming a Kings fan is watching the games. To that end we got a subscription to NHL Center Ice and have committed to watching the games. Step 2 is getting notified of when the games are on. To accomplish this I added the games to our family calendar, and decided to use what I learned writing my [ITFDB](/itfdb/) program and write one for the Kings. For the Dodgers I had to create a CSV file and read its contents. Fortunately, the NHL has a sweet API that I could use. This also gave me an opportunity to use an API for the first time! The API is relatively straightforward and has some really good documentation so using it wasn’t too challenging. import requests from sense_hat import SenseHat from datetime import datetime import pytz from dateutil.relativedelta import relativedelta def main(team_id): sense = SenseHat() local_tz = pytz.timezone('America/Los_Angeles') utc_now = pytz.utc.localize(datetime.utcnow()) now = utc_now.astimezone(local_tz) url = 'https://statsapi.web.nhl.com/api/v1/schedule?teamId={}'.format(team_id) r = requests.get(url) total_games = r.json().get('totalGames') for i in range(total_games): game_time = (r.json().get('dates')[i].get('games')[0].get('gameDate')) away_team = (r.json().get('dates')[i].get('games')[0].get('teams').get('away').get('team').get('name')) home_team = (r.json().get('dates')[i].get('games')[0].get('teams').get('home').get('team'… | 2018-11-09 | It’s time for Kings Hockey! 
A couple of years ago Emily and I decided to be Hockey fans. This hasn’t really meant anything except that we picked a team (the Kings) and ‘rooted’ for them (i.e. talked sh*t* to our hockey friends), looked up their … | ITFKH!!! | https://www.ryancheley.com/2018/11/09/itfkh/ |
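One small refactor worth noting for the loop in the row above: every `r.json()` call re-parses the HTTP response body, so parsing once and walking the result is cheaper and easier to read. A sketch against a hypothetical payload shaped like the schedule response (team names and the date are made up for illustration):

```python
# Hypothetical sample of the NHL schedule API response shape
data = {
    "totalGames": 1,
    "dates": [
        {
            "games": [
                {
                    "gameDate": "2018-11-09T03:00:00Z",
                    "teams": {
                        "away": {"team": {"name": "Los Angeles Kings"}},
                        "home": {"team": {"name": "Calgary Flames"}},
                    },
                }
            ]
        }
    ],
}

# Parse once (in real code: data = r.json()), then walk the structure
games = []
for date in data["dates"]:
    game = date["games"][0]
    games.append(
        (
            game["gameDate"],
            game["teams"]["away"]["team"]["name"],
            game["teams"]["home"]["team"]["name"],
        )
    )
```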
keeping-python-up-to-date-on-macos | ryan | technology | Sometimes the internet is a horrible, awful, ugly thing. And then other times, it’s exactly what you need. I have 2 Raspberry Pi each with different versions of Python. One running python 3.4.2 and the other running Python 3.5.3. I have previously tried to upgrade the version of the Pi running 3.5.3 to a more recent version (in this case 3.6.1) and read 10s of articles on how to do it. It did not go well. Parts seemed to have worked, while others didn’t. I have 3.6.1 installed, but in order to run it I have to issue the command `python3.6` which is _fine_ but not really what I was looking for. For whatever reason, although I do nearly all of my Python development on my Mac, it hadn’t occurred to me to upgrade Python there until last night. With a simple Google search the first result came to Stackoverflow (what else?) and [this](https://apple.stackexchange.com/questions/201612/keeping- python-3-up-to-date-on-a-mac) answer. brew update brew upgrade python3 Sometimes things on a Mac do ‘just work’. This was one of those times. I’m now running Python 3.7.1 and I’ll I needed to do was a simple command in the terminal. God bless the internet. | 2018-12-22 | Sometimes the internet is a horrible, awful, ugly thing. And then other times, it’s exactly what you need. I have 2 Raspberry Pi each with different versions of Python. One running python 3.4.2 and the other running Python 3.5.3. I have previously tried to upgrade … | Keeping Python up to date on macOS | https://www.ryancheley.com/2018/12/22/keeping-python-up-to-date-on-macos/ |
logging-in-a-django-app | ryan | technology | Per the [Django Documentation](https://docs.djangoproject.com/en/3.1/ref/settings/#std:setting-ADMINS) you can set up > A list of all the people who get code error notifications. When DEBUG=False > and AdminEmailHandler is configured in LOGGING (done by default), Django > emails these people the details of exceptions raised in the request/response > cycle. In order to set this up you need to include in your `settings.py` file something like: ADMINS = [ ('John', 'john@example.com'), ('Mary', 'mary@example.com') ] The difficulties I always ran into were: 1. How to set up the AdminEmailHandler 2. How to set up a way to actually email from the Django Server Again, per the [Django Documentation](https://docs.djangoproject.com/en/3.1/topics/logging/#django.utils.log.AdminEmailHandler "AdminEmailHandler"): > Django provides one log handler in addition to those provided by the Python > logging module Reading through the documentation didn’t **really** help me all that much. The docs show the following example: 'handlers': { 'mail_admins': { 'level': 'ERROR', 'class': 'django.utils.log.AdminEmailHandler', 'include_html': True, } }, That’s great, but there’s not a direct link (that I could find) to the example of how to configure the logging in that section. It is instead at the **VERY** bottom of the documentation page in the Contents section in the [Configured logging > Examples](https://docs.djangoproject.com/en/3.1/topics/logging/#configuring-logging) section ... and you _really_ need to know that you have to look for it! 
The important thing to do is to include the above in the appropriate `LOGGING` setting, like this: LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'handlers': { 'mail_admins': { 'level': 'ERROR', 'class': 'django.utils.log.AdminEmailHandler', 'include_html': True… | 2020-10-21 | Per the [Django Documentation](https://docs.djangoproject.com/en/3.1/ref/settings/#std:setting-ADMINS) you can set up > A list of all the people who get code error notifications. When DEBUG=False > and AdminEmailHandler is configured in LOGGING (done by default), Django > emails these people the details of exceptions raised in the request/response > cycle. In order to set this … | Logging in a Django App | https://www.ryancheley.com/2020/10/21/logging-in-a-django-app/
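The missing piece the post complains about is wiring the `mail_admins` handler to a logger; a sketch of a complete `LOGGING` setting (the `django.request` logger choice follows the Django docs' example, not anything specific to this post, and actually sending mail still requires the `EMAIL_*` settings):

```python
# Hedged sketch of a settings.py LOGGING dict: the mail_admins handler from the
# docs, attached to django.request so unhandled request exceptions email ADMINS.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'include_html': True,
        },
    },
    'loggers': {
        # django.request receives errors raised in the request/response cycle
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': False,
        },
    },
}
```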
logging-part-1 | ryan | technology | # Logging Last year I worked on an update to the package [tryceratops](https://pypi.org/project/tryceratops/) with [Gui Latrova](https://twitter.com/guilatrova) to include a verbose flag for logging. Honestly, Gui was a huge help and I wrote about my experience [here](https://www.ryancheley.com/2021/08/07/contributing-to-tryceratops/) but I didn't really understand why what I did worked. Recently I decided that I wanted to better understand logging so I dove into some posts from Gui, and sat down and read the documentation on logging from the standard library. My goal with this was to (1) be able to use logging in my projects, and (2) write something that may be able to help others. Full disclosure, Gui has a **really** [good article explaining logging](https://guicommits.com/how-to-log-in-python-like-a-pro/) and I think everyone should read it. My notes below are a synthesis of his article, my understanding of the [documentation from the standard library](https://docs.python.org/3/library/logging.html), and the [Python HowTo](https://docs.python.org/3/howto/logging.html) written in a way to answer the [Five W questions](https://www.education.com/game/five-ws-song/) I was taught in grade school. ## The Five W's **Who are the generated logs for?** Anyone trying to troubleshoot an issue, or monitor the history of actions that have been logged in an application. **What is written to the log?** The [formatter](https://docs.python.org/3/library/logging.html#formatter-objects) determines what to display or store. **When is data written to the log?** The [logging level](https://docs.python.org/3/library/logging.html#logging-levels) determines when to log the issue. **Where is the log data sent to?** The [handler](https://docs.python.org/3/library/logging.html#handler-objects) determines where to send the log data whether that's a file, or stdout. 
**Why would I want to use logging?** To keep a history of the actions taken while your code runs. **How is the data sent to the log?** The [loggers… | 2022-03-30 | # Logging Last year I worked on an update to the package [tryceratops](https://pypi.org/project/tryceratops/) with [Gui Latrova](https://twitter.com/guilatrova) to include a verbose flag for logging. Honestly, Gui was a huge help and I wrote about my experience [here](https://www.ryancheley.com/2021/08/07/contributing-to-tryceratops/) but I didn't really understand why what I did worked. Recently I decided that I … | Logging Part 1 | https://www.ryancheley.com/2022/03/30/logging-part-1/
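The five W's map directly onto objects in the standard library's `logging` module; a minimal, self-contained sketch (the logger name and messages are illustrative, not from the post):

```python
import io
import logging

# WHERE: a handler decides the destination -- here an in-memory stream,
# but it could just as easily be a file or stdout
stream = io.StringIO()
handler = logging.StreamHandler(stream)

# WHAT: a formatter decides what each record looks like
handler.setFormatter(logging.Formatter('%(levelname)s - %(message)s'))

# WHEN: the level decides which records get through
logger = logging.getLogger('example')
logger.setLevel(logging.WARNING)
logger.propagate = False  # keep this demo from also hitting the root logger

# HOW: the logger ties formatter, handler, and level together
logger.addHandler(handler)

logger.info('too quiet to be logged')   # below WARNING, dropped
logger.warning('something looks off')   # passes the level check

print(stream.getvalue().strip())  # WARNING - something looks off
```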
logging-part-2 | ryan | technology | In my [previous post](https://www.ryancheley.com/2022/03/30/logging-part-1/) I wrote about inline logging, that is, using logging in the code without a configuration file of some kind. In this post I'm going to go over setting up a configuration file to support the various different needs you may have for logging. Previously I mentioned this scenario: > Perhaps the DevOps team wants robust logging messages on anything `ERROR` > and above, but the application team wants to have `INFO` and above in a > rotating file name schema, while the QA team needs to have the `DEBUG` and > up output to standard out. Before we get into how we may implement something like what's above, let's review the parts of the Logger which are: * [formatters](https://docs.python.org/3/library/logging.html#formatter-objects) * [handlers](https://docs.python.org/3/library/logging.html#handler-objects) * [loggers](https://docs.python.org/3/library/logging.html#logger-objects) ## Formatters In a logging configuration file you can have multiple formatters specified. The above example doesn't state WHAT each team needs, so let's define it here: * DevOps: They need to know **when** the error occurred, what the **level** was, and what **module** the error came from * Application Team: They need to know **when** the error occurred, the **level**, what **module** and **line** * The QA Team: They need to know **when** the error occurred, the **level**, what **module** and **line**, and they need a **stack trace** For the DevOps Team we can define a formatter as such: '%(asctime)s - %(levelname)s - %(module)s' The Application team would have a formatter like this: '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s' while the QA team would have one like this: '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s' (the stack trace itself is appended when an exception is logged with `exc_info`, not by the format string). ## Handlers The Handler controls _where_ the data from the log is going to be sent. 
There are several kinds of handlers, but based on our requirements abov… | 2022-04-07 | In my [previous post](https://www.ryancheley.com/2022/03/30/logging-part-1/) I wrote about inline logging, that is, using logging in the code without a configuration file of some kind. In this post I'm going to go over setting up a configuration file to support the various different needs you may have for logging. Previously I mentioned … | Logging Part 2 | https://www.ryancheley.com/2022/04/07/logging-part-2/ |
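The formatters above and two of the handlers can be combined into a single `dictConfig` dictionary; here is a hedged sketch of the scenario (the handler choices and file name are my reading of the requirements, not the post's final answer, and the DevOps email handler is omitted since it needs mail settings):

```python
import logging
import logging.config

# Sketch: QA gets DEBUG+ on a stream, the application team gets INFO+ in a
# rotating file. Names ('qa_console', 'app_file', 'app.log') are illustrative.
LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'devops': {'format': '%(asctime)s - %(levelname)s - %(module)s'},
        'app': {'format': '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'},
        'qa': {'format': '%(asctime)s - %(levelname)s - %(module)s - %(lineno)s'},
    },
    'handlers': {
        'qa_console': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'qa',
        },
        'app_file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'level': 'INFO',
            'formatter': 'app',
            'filename': 'app.log',
            'maxBytes': 1_000_000,
            'backupCount': 3,
            'delay': True,  # don't open the file until something is written
        },
    },
    'root': {'handlers': ['qa_console', 'app_file'], 'level': 'DEBUG'},
}

logging.config.dictConfig(LOGGING_CONFIG)
# This DEBUG record reaches the console handler but is filtered by the file handler
logging.getLogger('demo').debug('visible to QA, skipped by the app file')
```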
making-background-images | ryan | technology | I'm a big fan of [podcasts](http://www.ryancheley.com/podcasts-i-like/). I've been listening to them for 4 or 5 years now. One of my favorite Podcast Networks, [Relay](http://www.relay.fm) just had their second anniversary. They offer memberships and after listening to hours and hours of _All The Great Shows_ I decided that I needed to become a [member](https://www.relay.fm/membership). One of the awesome perks of [Relay](http://www.relay.fm) membership is a set of **Amazing** background images. This is fortuitous as I've been looking for some good backgrounds for my iMac, and so it seemed like a perfect fit. On my iMac I have several `spaces` configured. One for `Writing`, one for `Podcast` and one for everything else. I wanted to take the backgrounds from Relay and have them on the `Writing` space and the `Podcasting` space, but I also wanted to be able to distinguish between them. One thing I could try to do would be to open up an image editor (like [Photoshop](http://www.photoshop.com), [Pixelmator](http://www.pixelmator.com/pro/) or [Acorn](https://flyingmeat.com/acorn/)) and add text to them one at a time (although I'm sure there is a way to script them) but I decided to see if I could do it using Python. Turns out, I can. 
This code will take the background images from my `/Users/Ryan/Relay 5K Backgrounds/` directory and spit them out into a subdirectory called `Podcasting` from PIL import Image, ImageStat, ImageFont, ImageDraw from os import listdir from os.path import expanduser, isfile, join # Declare Text Attributes TextFontSize = 400 TextFontColor = (128,128,128) font = ImageFont.truetype(expanduser("~/Library/Fonts/Inconsolata.otf"), TextFontSize) mypath = '/Users/Ryan/Relay 5K Backgrounds/' onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))] onlyfiles.remove('.DS_Store') rows = len(onlyfiles) for i in range(rows): img = Image.open(mypath+onlyfiles[i]) width, height = img.size draw = ImageDraw.Draw(img)… | 2017-09-17 | I'm a big fan of [podcasts](http://www.ryancheley.com/podcasts-i-like/). I've been listening to them for 4 or 5 years now. One of my favorite Podcast Networks, [Relay](http://www.relay.fm) just had their second anniversary. They offer memberships and after listening to hours and hours of _All The Great Shows_ I decided that I needed to … | Making Background Images | https://www.ryancheley.com/2017/09/17/making-background-images/
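The file-gathering part of a script like this can also be written with `pathlib`, which makes skipping dot-files like `.DS_Store` a one-line filter; a small sketch (function names are mine, and the directory argument is a stand-in):

```python
from pathlib import Path

def visible_files(names):
    """Filter out hidden files (e.g. '.DS_Store') from an iterable of file names."""
    return sorted(n for n in names if not n.startswith('.'))

def background_images(directory):
    """Return the visible file names found directly inside `directory`."""
    return visible_files(p.name for p in Path(directory).iterdir() if p.is_file())
```

This avoids the `onlyfiles.remove('.DS_Store')` call, which raises `ValueError` when the file happens not to be there.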
making-it-easy-to-ssh-into-a-remote-server | ryan | technology | Logging into a remote server is a drag. Needing to remember the password (or get it from [1Password](https://1password.com)); needing to remember the IP address of the remote server. Ugh. It’d be so much easier if I could just ssh username@servername and get into the server. And it turns out, you can. You just need to do two simple things. ## Simple thing the first: Update the `hosts` file on your local computer to map the IP address to a memorable name. The `hosts` file is located at `/etc/hosts` (at least on *nix based systems). Go to the hosts file in your favorite editor … my current favorite editor for simple stuff like this is vim. Once there, add the IP address you don’t want to have to remember, and then a name that you will remember. For example: 67.176.220.115 easytoremembername One thing to keep in mind, you’ll already have some entries in this file. Don’t mess with them. Leave them there. Seriously … it’ll be better for everyone if you do. ## Simple thing the second: Generate a public/private key pair and share the public key with the remote server From the terminal run the command `ssh-keygen -t rsa`. This will generate a public and private key. You will be asked for a location to save the keys to. The default (on macOS) is `/Users/username/.ssh/id_rsa`. I tend to accept the default (no reason not to) and leave the passphrase blank (this means you won’t have to enter a password, which is what we’re looking for in the first place!) Next, we copy the public key to the host(s) you want to access using the command ssh-copy-id <username>@<hostname> for example: ssh-copy-id pi@rpicamera The first time you do this you will get a message asking you if you’re sure you want to do this. Type in `yes` and you’re good to go. One thing to note, doing this updates the file `known_hosts`. If, for some reason, the server you are ssh-ing to needs to be rebuilt (i.e. 
you have to keep destroying your Digital Ocean Ubuntu server be… | 2018-05-05 | Logging into a remote server is a drag. Needing to remember the password (or get it from [1Password](https://1password.com)); needing to remember the IP address of the remote server. Ugh. It’d be so much easier if I could just ssh username@servername and get into the server. And it turns … | Making it easy to ssh into a remote server | https://www.ryancheley.com/2018/05/05/making-it-easy-to-ssh-into-a-remote-server/ |
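A hosts-file line is just an address, whitespace, and a name; a small helper that validates the address before you paste anything into `/etc/hosts` (the function name is mine, not from the post):

```python
import ipaddress

def hosts_entry(ip, name):
    """Build an /etc/hosts line, raising ValueError if `ip` is malformed."""
    ipaddress.ip_address(ip)  # raises ValueError on a bad address
    return f"{ip} {name}"

print(hosts_entry("67.176.220.115", "easytoremembername"))
# 67.176.220.115 easytoremembername
```

Validating first matters because a typo'd address in `/etc/hosts` fails silently — `ssh` just hangs trying to reach the wrong host.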
making-it-easy-to-ssh-into-a-remote-server-addendum | ryan | technology | I recently got a new raspberry pi (yes, I might have a problem) and wanted to be able to ssh into it without having to remember the IP or password. Luckily I wrote [this helpful post](/making-it-easy-to-ssh-into-a-remote-server.html) several months ago. While it got me most of the way there, I did run into a slight issue. ## First Issue The issue was that I had a typo for the command to generate a key. I had: `ssh-keyken -t rsa` Which should have been: `ssh-keygen -t rsa` When I copied and pasted the original command the terminal said there was no such command. 🤦‍♂️ ## Second Issue Once that got cleared up I went through the steps and was able to get everything set up. Or so I thought. On attempting to ssh into my new pi I was greeted with a password prompt. WTF? The first thing I did was to check to see what keys were in my `~/.ssh` folder. Sure enough there were a couple of them in there. ls ~/.ssh id_rsa id_rsa.github id_rsa.github.pub id_rsa.pub known_hosts read_only_key read_only_key.pub Next, I interrogated the help command for `ssh-copy-id` to see what flags were available. Usage: /usr/bin/ssh-copy-id [-h|-?|-f|-n] [-i [identity_file]] [-p port] [[-o <ssh -o options>] ...] [user@]hostname -f: force mode -- copy keys without trying to check if they are already installed -n: dry run -- no keys are actually copied -h|-?: print this help I figured let’s try the `-n` flag and get the output from that. Doing so gave me ryan@Ryans-MBP:~/Desktop$ ssh-copy-id -n pi@newpi /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/Users/ryan/.ssh/id_rsa.github.pub" /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system. 
(if you think this is a mistake, you may want to use -f option) OK … w… | 2019-03-25 | I recently got a new raspberry pi (yes, I might have a problem) and wanted to be able to ssh into it without having to remember the IP or password. Luckily I wrote [this helpful post](/making-it-easy-to-ssh-into-a-remote-server.html) several months ago. While it got me most of the way there, I did … | Making it easy to ssh into a remote server: Addendum | https://www.ryancheley.com/2019/03/25/making-it-easy-to-ssh-into-a-remote-server-addendum/
migrating-django-tailwind-cli-to-django-commons | ryan | technology | On Tuesday October 29 I worked with [Oliver Andrich](https://github.com/oliverandrich/), [Daniel Moran](https://github.com/cunla/) and [Storm Heg](https://github.com/Stormheg) to migrate Oliver's project [django-tailwind-cli](https://github.com/django-commons/django-tailwind-cli) from Oliver's GitHub project to Django Commons. This was the 5th library that has been migrated over, but the first one that I 'led'. I was a bit nervous. The Django Commons docs are great and super helpful, but the first time you do something, it can be nerve-wracking. One thing that was super helpful was knowing that Daniel and Storm were there to help me out when any issues came up. The first setup steps are pretty straightforward and we were able to get through them pretty quickly. Then we ran into an issue that none of us had seen previously. `django-tailwind-cli` had initially set up GitHub Pages for the docs, but migrated to use [Read the Docs](https://about.readthedocs.com/). However, the GitHub Pages setting was still enabled in the repo, so when we tried to migrate them over we ran into an error. Apparently you can't remove GitHub Pages using Terraform (the process that we use to manage the organization). We spent a few minutes trying to parse the error, make some changes, and try again (and again) and we were able to finally successfully get the migration completed 🎉 Some other things that came up during the migration were a maintainer that was set in the front end, but not in the Terraform file. Also, while I was making changes to the Terraform file locally I ran into an issue with an update that had been done in the GitHub UI on my branch, which caused a conflict for me locally. I've had to deal with this kind of thing before, but ... never with an audience! 
Trying to work through the issue was a bit stressful to say the least 😅 But, with the help of Daniel and Storm I was able to resolve the conflicts and get the code pushed up. As of this writing we have [6 libraries](https://github.com/orgs/django-commons/repositor… | 2024-11-20 | On Tuesday October 29 I worked with [Oliver Andrich](https://github.com/oliverandrich/), [Daniel Moran](https://github.com/cunla/) and [Storm Heg](https://github.com/Stormheg) to migrate Oliver's project [django-tailwind-cli](https://github.com/django-commons/django-tailwind-cli) from Oliver's GitHub project to Django Commons. This was the 5th library that has been migrated over, but the first one that I 'led'. I was a bit nervous. The Django … | Migrating django-tailwind-cli to Django Commons | https://www.ryancheley.com/2024/11/20/migrating-django-tailwind-cli-to-django-commons/
migrating-to-pelican-from-wordpress | ryan | technology | ## A little back story In October of 2017 I [wrote about how I migrated from SquareSpace to WordPress](https://www.ryancheley.com/2017/10/01/migrating-from-square-space-to-word-press/). After almost 4 years I’ve decided to migrate again, this time to [Pelican](https://blog.getpelican.com). I did a bit of work with Pelican during my [100 Days of Web Code](https://www.ryancheley.com/2019/08/31/my-first-project-after-completing-the-100-days-of-web-in-python/) back in 2019. A good question to ask is, “why migrate to a new platform?” The answer is that while writing my post [Debugging Setting up a Django Project](https://www.ryancheley.com/2021/06/13/debugging-setting-up-a-django-project/) I had to go back and make a change. It was the first time I’d ever had to use the WordPress Admin to write anything ... and it was awful. My writing and posting workflow involves [Ulysses](https://ulysses.app) where I write everything in Markdown. Having to use the WYSIWYG interface and the ‘blocks’ in WordPress just broke my brain. That meant what should have been a slight tweak ended up taking me like 45 minutes. I decided to give Pelican a shot in a local environment to see how it worked. And it turned out to work very well for my brain and my writing style. ## Setting it up I set up a local instance of Pelican using the [Quick Start](https://docs.getpelican.com/en/latest/quickstart.html "Quick Start") guide in the docs. Pelican has a CLI utility that converts the XML into Markdown files. This allowed me to export my WordPress blog content to its XML output and save it in the Pelican directory I created. I then ran the command: pelican-import --wp-attach -o ./content ./wordpress.xml This created about 140 .md files. Next, I ran a few `Pelican` commands to generate the output: pelican content and then the local web server: pelican --listen I reviewed the page and realized there was a bit of clean up that needed to be done. 
I had categories of Blog posts tha… | 2021-07-02 | ## A little back story In October of 2017 I [wrote about how I migrated from SquareSpace to WordPress](https://www.ryancheley.com/2017/10/01/migrating-from-square-space-to-word-press/). After almost 4 years I’ve decided to migrate again, this time to [Pelican](https://blog.getpelican.com). I did a bit of work with Pelican during my [100 Days of Web Code](https://www.ryancheley.com/2019/08/31/my-first-project-after-completing-the-100-days-of-web-in-python/) back in 2019 … | Migrating to Pelican from Wordpress | https://www.ryancheley.com/2021/07/02/migrating-to-pelican-from-wordpress/
mischief-managed | ryan | technology | A few weeks back I decided to try and update my Python version with Homebrew. I had already been through a situation where an update like this caused a problem, but I also knew what the fix [was](/fixing-a-pycharm-issue-when-updating-python-made-via-homebrew/ "Homebrew and PyCharm don’t mix"). With this knowledge in hand I happily performed the update. To my surprise, 2 things happened: 1. The update seemed to have me go from Python 3.7.6 to 3.7.3 2. When trying to reestablish my `Virtual Environment` two packages wouldn’t install: `psycopg2` and `django-heroku` Now, the update/backdate isn’t the end of the world. Quite honestly, next weekend I’m going to just ditch Homebrew and go with the standard download from [Python.org](https://www.python.org "Python") because I’m hoping that this nonsense won’t be an issue anymore. The second issue was a bit more irritating though. I spent several hours trying to figure out what the problem was, only to find out, there wasn’t one really. The ‘fix’ to the issue was to 1. Open PyCharm 2. Go to Settings 3. Go to ‘Project Interpreter’ 4. Click the ‘+’ to add a package 5. Look for the package that wouldn’t install 6. Click ‘Install Package’ 7. Voilà ... [mischief managed](https://www.hp-lexicon.org/magic/mischief-managed/) The next time this happens I’m just buying a new computer | 2020-02-10 | A few weeks back I decided to try and update my Python version with Homebrew. I had already been through a situation where an update like this caused a problem, but I also knew what the fix [was](/fixing-a-pycharm-issue-when-updating-python-made-via-homebrew/ "Homebrew and PyCharm don’t mix"). With this knowledge in hand I happily performed … | Mischief Managed | https://www.ryancheley.com/2020/02/10/mischief-managed/
monitoring-the-temperature-of-my-raspberry-pi-camera | ryan | technology | In late April of this year I wrote a script that would capture the temperature of the Raspberry Pi that sits above my Hummingbird feeder and log it to a file. It’s a straightforward enough script that captures the date, time and temperature as given by the internal `measure_temp` function. In code it looks like this: MyDate="`date +'%m/%d/%Y, %H:%M, '`" MyTemp="`/opt/vc/bin/vcgencmd measure_temp |tr -d "=temp'C"`" echo "$MyDate$MyTemp" >> /home/pi/Documents/python_projects/temperature/temp.log I haven’t ever really done anything with the file, but one thing I wanted to do was to get alerted if (when) the temperature exceeded the recommended level of 70 C. To do this I installed `ssmtp` onto my Pi using `apt-get` sudo apt-get install ssmtp With that installed I am able to send an email using the following command: echo "This is the email body" | mail -s "This is the subject" user@domain.tld With this tool in place I was able to attempt to send an alert if (when) the Pi’s temperature got above 70 C (the maximum recommended running temp). At first, I tried adding this code: if [ "$MyTemp" -gt "70" ]; then echo "Camera Pi Running Hot" | mail -s "Warning! The Camera Pi is Running Hot!!!" user@domain.tld fi Where the `$MyTemp` came from the above code that gets logged to the temp.log file. It didn’t work. The problem is that the temperature I’m capturing for logging purposes is a float, while bash’s `-gt` only compares integers. No problem, I’ll just make the “70” into a “70.0” and that will fix the ... oh wait. That didn’t work either. OK. I tried various combinations, trying to see what would work and finally determined that there is a way to get the temperature as an integer, but it meant using a different method to capture it. 
This is done by adding this line: ComparisonTemp=$(($(cat /sys/class/thermal/thermal_zone0/temp)/1000)) The code above gets the temperature as an inte… | 2018-12-04 | In late April of this year I wrote a script that would capture the temperature of the Raspberry Pi that sits above my Hummingbird feeder and log it to a file. It’s a straightforward enough script that captures the date, time and temperature as given by the internal … | Monitoring the temperature of my Raspberry Pi Camera | https://www.ryancheley.com/2018/12/04/monitoring-the-temperature-of-my-raspberry-pi-camera/
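The float-versus-integer comparison that tripped up the bash script is trivial in Python; a hypothetical helper mirroring the same check (the parsing of `vcgencmd`'s `temp=48.3'C` output and the 70 C threshold are my reading of the post):

```python
def parse_temp(vcgencmd_output):
    """Extract the Celsius reading from vcgencmd output like "temp=48.3'C"."""
    return float(vcgencmd_output.strip().split('=')[1].rstrip("'C"))

def running_hot(celsius, threshold=70.0):
    """True when the Pi is above the recommended maximum running temperature."""
    return celsius > threshold

print(running_hot(parse_temp("temp=48.3'C")))  # False
```

Unlike `[ "$MyTemp" -gt "70" ]`, Python's `>` compares floats directly, so no integer workaround is needed.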
moving-my-pycharm-directory-or-how-i-spent-my-saturday-after-jacking-up-my-pycharm-environment | ryan | technology | Every once in a while I get a wild hair and decide that I need to ‘clean up’ my directories. This **never** ends well and I almost always mess up something, but I still do it. Why? I’m not sure, except that I _forget_ that I’ll screw it up. 🤦‍♂️ Anyway, on a Saturday morning when I had nothing but time I decided that I’d move my PyCharm directory from /Users/ryan/PyCharm to /Users/ryan/Documents/PyCharm for no other reason than **because**. I proceeded to use the command line to move the folder mv /Users/ryan/PyCharm/ /Users/ryan/Documents/PyCharm/ Nothing too big, right? Just a simple file move. Not so much. I then tried to open a project in PyCharm and it promptly freaked out. Since I use virtual environments for my Python projects AND they tend to have paths that reference where they exist, suddenly ALL of my virtual environments were kind of just _gone_. Whoops! OK. No big deal. I just undid my move mv /Users/ryan/Documents/PyCharm/ /Users/ryan/PyCharm That should fix me up, right? Well, mostly. I had to re-register the virtual environments and reinstall all of the packages in my projects (mostly not a big deal with PyCharm) but holy crap it was scary. I thought I had hosed my entire set of projects (not that I have anything that’s critical … but still). Anyway, this is mostly a note to myself. > The next time you get a wild hair to move stuff around, just keep it where > it is. There’s no reason for it (unless there is). But seriously, ask yourself first, “If I don’t move this what will happen?” If the answer is anything less than “Something awful” go watch a baseball game, or go to the pool, or write some code. Don’t mess with your environment unless you really want to spend a couple of hours cleaning it up! | 2018-08-12 | Every once in a while I get a wild hair and decide that I need to ‘clean up’ my directories. 
This **never** ends well and I almost always mess up something, but I still do it. Why? I’m not sure, except that I _forget_ that I’ll screw it … | Moving my Pycharm Directory or How I spent my Saturday after jacking up my PyCharm environment | https://www.ryancheley.com/2018/08/12/moving-my-pycharm-directory-or-how-i-spent-my-saturday-after-jacking-up-my-pycharm-environment/ |
my-experience-with-the-100-days-of-web-in-python | ryan | technology | As soon as I discovered the Talk Python to Me Podcast, I discovered the Talk Python to Me courses. Through my job I have a basically free subscription to PluralSight so I wasn’t sure that I needed to pay for the courses when I was effectively getting courses in Python for free. After taking a couple (well, truth be told, all) of the Python courses at PluralSight, I decided, what the heck, the courses at Talk Python looked interesting, Michael Kennedy has a good instructor’s voice and is genuinely excited about Python, and if it didn’t work out, it didn’t work out. I’m so glad I did, and I’m so glad I went through the 100 Days of Web in Python course. On May 2, 2019 I saw that the course had been released and I [tweeted](https://mobile.twitter.com/ryancheley/status/1124127232262152192 "This!") > This x 1000000! Thank you so much @TalkPython. I can’t wait to get > started! I started on the course on May 4, 2019 and completed it August 11, 2019. Full details on the course are [here](https://training.talkpython.fm/courses/details/100-days-of-web-in-python "#100DaysOfWeb in Python"). Of the 28 concepts that were reviewed over the course, my favorite things were learning [Django](https://www.djangoproject.com "Django Project") and [Django Rest Framework](https://www.django-rest-framework.org "DRF") and [Pelican](https://blog.getpelican.com "Pelican"). Holy crap, those parts were just so much fun for me. Part of my interest in Django and DRF comes from [William S Vincent’s books](https://wsvincent.com/books/ "Will Vincent Books") and Podcast [Django Chat](https://djangochat.com "Django Chat"), but having actual videos to watch to get me through some of the things that have been conceptually tougher for me was a godsend. The other part that I really liked was actual deployment to a server. 
I had tried (about 16 months ago) to deploy a Django app to Digital Ocean and it was an unmitigated disaster. No static files no matter what I did. I eventually gave up. In this course I really learned how to deploy to b… | 2019-08-18 | As soon as I discovered the Talk Python to me Podcast, I discovered the Talk Python to me courses. Through my job I have a basically free subscription to PluralSight so I wasn’t sure that I needed to pay for the courses when I was effectively getting courses in … | My Experience with the 100 Days of Web in Python | https://www.ryancheley.com/2019/08/18/my-experience-with-the-100-days-of-web-in-python/ |
my-first-commit-to-an-open-source-project-django | ryan | technology | Last September the annual Django Con was held in San Diego. I **really** wanted to go, but because of other projects and conferences for my job, I wasn’t able to make it. The next best thing was to watch the [videos from DjangoCon on YouTube](https://www.youtube.com/playlist?list=PL2NFhrDSOxgXXUMIGOs8lNe2B-f4pXOX-). I watched a couple of the videos, but one that really caught my attention was by [Carlton Gibson](https://github.com/carltongibson) titled “[Your Web Framework Needs You: An Update by Carlton Gibson](https://www.youtube.com/watch?v=LjTRSH0pNBo)”. I took what Carlton said to heart and thought, I really should be able to do _something_ to help. I went to the [Django Issues site](https://code.djangoproject.com/) and searched for an **Easy Pickings** issue that involved documentation and found [issue 31006 “Document how to escape a date/time format character for the |date and |time filters.”](https://code.djangoproject.com/ticket/31006) I read the [steps on what I needed to do to submit a pull request](https://docs.djangoproject.com/en/dev/internals/contributing/writing-code/working-with-git/#publishing-work), but since it was my first time **ever** participating like this … I was a bit lost. Luckily there isn’t anything that you can break, so I was able to wander around for a bit and get my bearings. I forked the GitHub repo and I cloned it locally. I then spent an **embarrassingly** long time trying to figure out where the change was going to need to be made, and exactly what needed to change. Finally, with my changes made, I [pushed my code changes](https://github.com/django/django/pull/12128#issue-344767579) to GitHub and waited. Within a few hours [Mariusz Felisiak replied back](https://github.com/django/django/pull/12128#issuecomment-557804299) and asked about a suggestion he had made (but which I missed). 
I dug back into the documentation, found what he was referring to, and made (what I thought was) his suggested change. Another push and a bit more waiting. Mariusz Felisiak replied back… | 2019-12-07 | Last September the annual Django Con was held in San Diego. I **really** wanted to go, but because of other projects and conferences for my job, I wasn’t able to make it. The next best thing was to watch the [videos from DjangoCon on YouTube](https://www.youtube.com/playlist?list=PL2NFhrDSOxgXXUMIGOs8lNe2B-f4pXOX-). I watched a couple … | My first commit to an Open Source Project: Django | https://www.ryancheley.com/2019/12/07/my-first-commit-to-an-open-source-project-django/
my-first-django-project | ryan | technology | I've been writing code for about 15 years (on and off) and Python for about 4 or 5 years. With Python it's mostly small scripts and such. I’ve never considered myself a ‘real programmer’ (Python or otherwise). About a year ago, I decided to change that (for Python at the very least) when I set out to do [100 Days Of Web in Python](https://training.talkpython.fm/courses/details/100-days-of-web-in-python) from [Talk Python To Me](https://talkpython.fm/home). Part of that course were two sections taught by [Bob](https://pybit.es/author/bob.html) regarding [Django](https://www.djangoproject.com). I had tried to learn [Flask](https://flask.palletsprojects.com/en/1.1.x/) before and found it ... overwhelming to say the least. Sure, you could get a ‘hello world’ app in 5 lines of code, but then what? If you wanted to do just about anything it required ‘something’ else. I had tried Django before, but wasn't able to get over the 'hump' of deploying. Watching the Django section in the course made it just click for me. Finally, a tool to help me make AND deploy something! But what? ## The Django App I wanted to create A small project I had done previously was to write a short [script](https://github.com/ryancheley/itfdb) for my Raspberry Pi to tell me when LA Dodger (Baseball) games were on (it also has beloved Dodger Announcer [Vin Scully](https://en.wikipedia.org/wiki/Vin_Scully) say his catch phrase, “It’s time for Dodger baseball!!!”). I love the Dodgers. But I also love baseball. I love baseball so much I have on my bucket list a trip to visit all 30 MLB stadia. Given my love of baseball, and my newfound fondness for Django, I thought I could write something to keep track of visited stadia. I mean, how hard could it _really_ be? ## What does it do? My Django Site uses the [MLB API](https://statsapi.mlb.com) to search for games and allows a user to indicate a game seen in person. 
This allows them to track which stadia they've been to. My site is composed of 4 apps: * Users * Content * API * Stadium Tracker … | 2020-05-02 | I've been writing code for about 15 years (on and off) and Python for about 4 or 5 years. With Python it's mostly small scripts and such. I’ve never considered myself a ‘real programmer’ (Python or otherwise). About a year ago, I decided to change that (for Python at … | My First Django Project | https://www.ryancheley.com/2020/05/02/my-first-django-project/ |
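The MLB stats API mentioned above is queried over plain HTTP. As a sketch, here is a helper that builds a schedule query URL; the endpoint parameters and the team id used are my assumptions for illustration, not the site's actual code:

```python
from typing import Optional
from urllib.parse import urlencode

def schedule_url(date: str, team_id: Optional[int] = None) -> str:
    """Build a schedule query URL against the MLB stats API (shape assumed)."""
    params = {"sportId": 1, "date": date}  # sportId 1 = MLB (assumption)
    if team_id is not None:
        params["teamId"] = team_id
    return "https://statsapi.mlb.com/api/v1/schedule?" + urlencode(params)

print(schedule_url("2019-08-24", team_id=119))  # 119 assumed to be the Dodgers
```

The response can then be walked for the games a user wants to mark as seen.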
my-first-project-after-completing-the-100-days-of-web-in-python | ryan | technology | As I mentioned in my last post, after completing the 100 Days of Web in Python I was moving forward with a Django app I wrote. I pushed up my first version to Heroku on August 24. At that point it would allow users to add a game that they had seen, but when it displayed the games it would show a number (the game’s ID) instead of anything useful. A few nights ago (Aug 28) I committed a version which allows the user to see which game they add, i.e. there are actual human readable details versus just a number! The page can be found [here](https://www.stadiatracker.com). It feels really good to have it up in a place where people can actually see it. That being said, I discovered a couple of things on the publish that I’d like to fix. I have a method that returns details about the game. One problem is that if any of the elements return `None` then the front page returns a Server 500 error ... this is not good. It took a bit of googling to see what the issue was. The way I found the answer was an idea to turn Debug to True on my ‘prod’ server and inspect the output. That helped me identify the issue. To ‘fix’ it in the short term I just deleted all of the data for the games seen in the database. I’m glad that it happened because it taught me some stuff that I knew I needed to do, but maybe didn’t pay enough attention to ... like writing unit tests. Based on that experience I wrote out a roadmap of sorts for the updates I want to get into the app: * Tests for all classes and methods * Ability to add minor league games * Create a Stadium Listing View * More robust search tool that allows a single team to be selected * Logged in user view for only their games * Create a List View of games logged per stadium * Create a List View of attendees (i.e. 
users) at games logged * Add more user features: * Ability to add a picture * Ability to add Twitter handle * Ability to add Instagram handle * Ability to add game notes * Create a Heroku Pipeline to ensure that pushes to PROD are do… | 2019-08-31 | As I mentioned in my last post, after completing the 100 Days of Web in Python I was moving forward with a Django app I wrote. I pushed up my first version to Heroku on August 24. At that point it would allow users to add a game that they … | My first project after completing the 100 Days of Web in Python | https://www.ryancheley.com/2019/08/31/my-first-project-after-completing-the-100-days-of-web-in-python/ |
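As a sketch of the kind of defensive handling that avoids the `None`-induced 500 described above (the function and field names here are invented for illustration, not the site's actual code):

```python
def game_summary(game: dict) -> str:
    """Render a human-readable game line, tolerating None or missing fields."""
    away = game.get("away_team") or "Unknown away team"
    home = game.get("home_team") or "Unknown home team"
    when = game.get("game_date") or "an unknown date"
    return f"{away} at {home} on {when}"

# A row with a missing value no longer blows up the page
print(game_summary({"home_team": "Dodgers", "away_team": None, "game_date": "2019-08-24"}))
```

Unit tests over exactly these edge cases (every field `None`, fields absent entirely) are what the roadmap's first bullet would catch.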
my-first-python-package | ryan | technology | A few months ago I was inspired by [Simon Willison](https://simonwillison.net "Simon, creator of Datasette") and his project [Datasette](https://datasette.io "Datasette - An awesome tool for data exploration and publishing") and its related ecosystem to write a Python Package for it. I use [toggl](https://toggl.com "Toggl - a time tracking tool") to track my time at work and I thought this would be a great opportunity to use that data with [Datasette](https://datasette.io "Datasette - An awesome tool for data exploration and publishing") and see if I couldn’t answer some interesting questions, or at the very least, do some neat data discovery. The purpose of this package is to: > Create a SQLite database containing data from your [toggl](https://toggl.com "Toggl - a time tracking tool") account I followed the [tutorial for committing a package to PyPi](https://packaging.python.org/tutorials/packaging-projects/ "How do I add a package to PyPi?") and did the first few pushes manually. Then, using a GitHub action from one of Simon’s [Datasette](https://datasette.io "Datasette - An awesome tool for data exploration and publishing") projects, I was able to automate it when I make a release on GitHub! Since the initial commit on March 7 (my birthday BTW) I’ve had 10 releases, with the most recent one coming yesterday, which fixed an issue with one of the tables reporting back an API key which, if published on the internet, could be a bad thing ... so hooray for security enhancements! Anyway, it was a fun project, and it got me more interested in authoring Python packages. I’m hoping to do a few more related to [Datasette](https://datasette.io) (although I’m not sure what to write honestly!). Be sure to check out the package on [PyPi.org](https://pypi.org/project/toggl-to-sqlite/ "toggl-to-SQLite") and the source code on [GitHub](https://github.com/ryancheley/toggl-to-sqlite/ "GitHub repo of toggl-to-sqlite"). 
| 2021-06-06 | A few months ago I was inspired by [Simon Willison](https://simonwillison.net "Simon, creator of Datasette") and his project [Datasette](https://datasette.io "Datasette - An awesome tool for data exploration and publishing") and its related ecosystem to write a Python Package for it. I use [toggl](https://toggl.com "Toggl - a time tracking tool") to track my time at work and I thought this would be a great opportunity to use that data with [Datasette](https://datasette.io "Datasette - An awesome tool for data exploration and publishing") and … | My First Python Package | https://www.ryancheley.com/2021/06/06/my-first-python-package/ |
my-mac-session-with-apple | ryan | technology | For Christmas I bought myself a 2017 13-inch MacBook Pro with Touch Bar. Several bonuses were associated with the purchase: 1. A \$150 Apple Gift Card because I bought the MacBook Pro on Black Friday and Apple had a special going (w00t!) 2. The Credit Card I use to make **ALL** of my purchases at Apple has a 3% cash back (in the form of iTunes cards) 3. A free 30 minute online / phone session with an ‘Apple Specialist’ Now I didn’t know about item number 3 when I made the purchase, but was greeted with an email informing me of my great luck. This is my fifth Mac1 and I don’t remember ever getting this kind of service before. So I figured, what the hell and decided to snooze the email until the day after Christmas to remind myself to sign up for the session. When I entered the session I was asked to optionally provide some information about myself. I indicated that I had been using a Mac for several years and considered myself an intermediate user. My Apple ‘Specialist’ was _[Jaime](http://gameofthrones.wikia.com/wiki/Jaime_Lannister "No ... not that one")_. She confirmed the optional notes that I entered and we were off to the races. Now a lot of what she told me about Safari (blocking creepy tracking behavior, ability to mute sound from auto play videos, default site to display in reader view) I knew from the [WWDC Keynote](https://developer.apple.com/videos/play/wwdc2017/101/ "WWDC Keynote") that I watched back in June, but I listened just in case I had missed something from that session (or the [10s / 100s of hours of podcasts](https://relay.fm "All the Great Shows!") I listened to about the Keynote). One thing that I had heard about was the ability to _pin_ tabs in Safari. I never really knew what that meant and figured it wasn’t anything that I needed. I was wrong. Holy crap is [pinning tabs in Safari](https://www.youtube.com/watch?v=k-ssw5MKAno "Pinning Tabs!") a useful feature! 
I can keep all of my most used sites pinned and get to them really quickly and they get auto refreshed! Sweet! The … | 2017-12-27 | For Christmas I bought myself a 2017 13-inch MacBook Pro with Touch Bar. Several bonuses were associated with the purchase: 1. A \$150 Apple Gift Card because I bought the MacBook Pro on Black Friday and Apple had a special going (w00t!) 2. The Credit Card I use to make **ALL** of … | My Mac session with Apple | https://www.ryancheley.com/2017/12/27/my-mac-session-with-apple/ |
my-map-art-project | ryan | technology | I’d discovered a python package called `osmnx` which will take GIS data and allow you to draw maps using python. Pretty cool, but I wasn’t sure what I was going to do with it. After a bit of playing around with it I finally decided that I could make some pretty cool [Fractures](https://www.fractureme.com "Fracture"). I’ve got lots of Fracture images in my house and I even turned my diplomas into Fractures to hang up on the wall at my office, but I hadn’t tried to make anything like this before. I needed to figure out what locations I was going to map. I decided that I wanted to do 9 of them so that I could create a 3 x 3 grid of these maps. I selected 9 cities that were important to me and my family for various reasons. Next, writing the code. The script is 54 lines of code and doesn’t really adhere to PEP8, but that just gives me a chance to do some reformatting / refactoring later on. In order to get the desired output I needed several libraries:

* osmnx (as I’d mentioned before)
* matplotlib.pyplot
* numpy
* PIL

If you’ve never used PIL before it’s the ‘Python Image Library’ and according to its [home page](http://www.pythonware.com/products/pil/ "Python Image Library Home Page") it

> adds image processing capabilities to your Python interpreter. This library supports many file formats, and provides powerful image processing and graphics capabilities.

OK, let’s import some libraries!

    import osmnx as ox, geopandas as gpd, os
    import matplotlib.pyplot as plt
    import numpy as np
    from PIL import Image
    from PIL import ImageFont
    from PIL import ImageDraw

Next, we establish the configurations:

    ox.config(log_file=True, log_console=False, use_cache=True)

The `ox.config` allows you to specify several options. In this case, I’m: 1. Specifying that the logs be saved to a file in the log directory 2. 
Suppress the output of the log file to the console (this is helpful to have set to `True` when you’re first running the script… | 2018-01-12 | I’d discovered a python package called `osmnx` which will take GIS data and allow you to draw maps using python. Pretty cool, but I wasn’t sure what I was going to do with it. After a bit of playing around with it I finally decided that I could … | My Map Art Project | https://www.ryancheley.com/2018/01/12/my-map-art-project/ |
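The 3 x 3 grid assembly itself can be sketched with PIL alone, using solid-color tiles as stand-ins for the nine rendered maps (the tile size and colors are illustrative, not the script's actual values):

```python
from PIL import Image

TILE = 200  # pixels per map tile (illustrative size)
colors = ["navy", "teal", "olive", "maroon", "gray", "purple", "green", "orange", "brown"]
tiles = [Image.new("RGB", (TILE, TILE), c) for c in colors]  # stand-ins for the 9 maps

# Paste each tile into its row/column slot of the 3 x 3 composite
grid = Image.new("RGB", (3 * TILE, 3 * TILE), "white")
for i, tile in enumerate(tiles):
    grid.paste(tile, ((i % 3) * TILE, (i // 3) * TILE))
```

Swapping the placeholder tiles for the `osmnx`/`matplotlib` output images gives the final Fracture-ready composite.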
my–first–python-script-that-does-something | ryan | technology | I've been interested in python as a tool for a while and today I had the chance to try and see what I could do. With my 12.9 iPad Pro set up at my desk, I started out. I have [Ole Zorn's Pythonista 3](http://omz-software.com/pythonista/) installed so I started on my first script. My first task was to scrape something from a website. I tried to start with a website listing doctors, but for some reason the html rendered didn't include anything useful. So the next best thing was to find a website with staff listed on it. I used my dad's company and his [staff listing](http://www.graphtek.com/Our-Team) as a starting point. I started with a quick Google search for Pythonista web scraping and came across [this](https://forum.omz-software.com/topic/1513/screen-scraping) post on the Pythonista forums. That got me this much of my script:

    import bs4, requests

    myurl = 'http://www.graphtek.com/Our-Team'

    def get_beautiful_soup(url):
        return bs4.BeautifulSoup(requests.get(url).text, "html5lib")

    soup = get_beautiful_soup(myurl)

Next, I needed to see how to start traversing the html to get the elements that I needed. I recalled something I read a while ago and was (luckily) able to find some [help](https://first-web-scraper.readthedocs.io/en/latest/). That got me this: `tablemgmt = soup.findAll('div', attrs={'id':'our-team'})` This was close, but it would only return 2 of the 3 `div` tags I cared about (the management team has a different id for some reason ... ) I did a search for regular expressions and Python and found this useful [stackoverflow](http://stackoverflow.com/questions/24748445/beautiful-soup-using-regex-to-find-tags) question and saw that if I updated my imports to include `re` then I could use regular expressions. 
Great, update the imports section to this: `import bs4, requests, re` And added `re.compile` to my `findAll` to get this: `tablemgmt = soup.findAll('div', attrs={'id':re.compile('our-team')})` Now I had all 3 of the `div` tags … | 2016-10-15 | I've been interested in python as a tool for a while and today I had the chance to try and see what I could do. With my 12.9 iPad Pro set up at my desk, I started out. I have [Ole Zorn's Pythonista 3](http://omz-software.com/pythonista/) installed so I started on … | My First Python Script that does 'something' | https://www.ryancheley.com/2016/10/15/my–first–python-script-that-does-something/ |
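The `re.compile` trick can be demonstrated on a small inline snippet; the ids below are invented for illustration, and `find_all` is the modern bs4 spelling that `findAll` aliases:

```python
import re
from bs4 import BeautifulSoup

html = """
<div id="our-team-mgmt"><p>Alice</p></div>
<div id="our-team-dev"><p>Bob</p></div>
<div id="our-team-design"><p>Carol</p></div>
"""

soup = BeautifulSoup(html, "html.parser")

# A plain string id matches only exactly; a compiled pattern is matched
# with .search(), so all three "our-team*" divs come back
divs = soup.find_all("div", attrs={"id": re.compile("our-team")})
print(len(divs))
```

This is the same behavior the post relies on to pick up the management team's differently-named `div`.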
new-apple-watch | ryan | technology | New Watch ## The first week I've been rocking a series 2 Apple Watch for about 18 months. I timed my purchase just right to not get a series 3 when it went on sale (🤦🏻♂️). When the series 4 was released I decided that I wanted to get one, but was a bit too slow (and tired) to stay up and order one at launch. This meant that I didn't get my new Apple Watch until last Saturday (nearly 5 weeks later). I wanted to write down my thoughts on the Watch and what it's meant for me. I won't go into specs and details, just what I've found that I liked and didn't like. ## The Good Holy crap is it fast. I mean, like really fast. I've never had a watch that responded like this (before my series 2 I had a series 0). It reacts when I want it to, so much so that I'm sometimes not prepared. It reminds me of the transition from Touch ID Gen 1 to Touch ID Gen 2. I really appreciate how fast everything comes up. When I start an activity, it’s there (no more waiting like on Series 2). When I want to pair with my AirPods … it’s there and ready to go. I also really like how much thinner it is and the increase in size. At first I thought it was ‘monstrous’ but now I’m trying to figure out how I ever lived with 2 fewer millimeters. I also decided to get the Cellular Version just in case. It was a bit more expensive, and I probably won’t end up using it past the free trial I got, but it’s nice to know that I can have it if I need it. I haven’t had a chance to use it (yet) but hopefully I’ll get a chance here soon. ## The Bad So far, nothing has struck me as being ‘bad’. It’s the first Apple Watch I’ve had that’s really exceeded my expectations in terms of performance and the sheer joy that I get out of using it. ## Conclusion Overall I **love** the Series 4 Watch. It doesn’t do anything different than the Series 2 that I had (except I can make phone calls without my phone if I need to) but _oh my_ is it fast! 
If someone is on a Series 2 and is wondering if jumping to the Series 4 is worth it … it totally is. | 2018-11-03 | New Watch ## The first week I've been rocking a series 2 Apple Watch for about 18 months. I timed my purchase just right to not get a series 3 when it went on sale (🤦🏻♂️). When the series 4 was released I decided that I wanted to get one, but was … | New Watch | https://www.ryancheley.com/2018/11/03/new-apple-watch/ |
pitching-stats-and-python | ryan | technology | I'm an avid [Twitter](https://www.twitter.com) user, mostly as a replacement [RSS](https://en.wikipedia.org/wiki/RSS) feeder, but also because I can't stand [Facebook](https://www.facebook.com) and this allows me to learn about really important world events when I need to and to just stay isolated with [my head in the sand](http://gerdleonhard.typepad.com/.a/6a00d8341c59be53ef013488b614d8970c-800wi) when I don't. It's perfect for me. One of the people I follow on [Twitter](https://twitter.com/drdrang) is [Dr. Drang](http://www.leancrew.com/all-this/) who is an Engineer of some kind by training. He also appears to be a fan of baseball and posted an [analysis of Jake Arrieta's pitching](http://leancrew.com/all-this/2016/09/jake-arrieta-and-python/) over the course of the 2016 MLB season (through September 22 at least). When I first read it I hadn't done too much with Python, and while I found the results interesting, I wasn't sure what any of the code was doing (not really anyway). Since I had just spent the last couple of days learning more about `BeautifulSoup` specifically and `Python` in general I thought I'd try to do two things: 1. Update the data used by Dr. Drang 2. Try to generalize it for any pitcher Dr. Drang uses a flat csv file for his analysis and I wanted to use `BeautifulSoup` to scrape the data from [ESPN](https://www.espn.com) directly. OK, I know how to do that (sort of ¯\\_(ツ)_/¯). First things first, import your libraries:

    import pandas as pd
    from functools import partial
    import requests
    import re
    from bs4 import BeautifulSoup
    import matplotlib.pyplot as plt
    from datetime import datetime, date
    from time import strptime

The next two lines I ~~stole~~ borrowed directly from Dr. Drang's post. The first line is to force the plot output to be inline with the code entered in the terminal. 
The second he explains as such: > > The odd ones are the `rcParams` call, which makes the inline graphs bigger > than the tiny Jupyter default, and the … | 2016-11-21 | I'm an avid [Twitter](https://www.twitter.com) user, mostly as a replacement [RSS](https://en.wikipedia.org/wiki/RSS) feeder, but also because I can't stand [Facebook](https://www.facebook.com) and this allows me to learn about really important world events when I need to and to just stay isolated with [my head in the sand](http://gerdleonhard.typepad.com/.a/6a00d8341c59be53ef013488b614d8970c-800wi) when I don't. It's perfect for … | Pitching Stats and Python | https://www.ryancheley.com/2016/11/21/pitching-stats-and-python/ |
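The general shape of this kind of analysis, a per-start log rolled up into a running stat, can be sketched with pandas (the numbers below are toy values, not Arrieta's actual line):

```python
import pandas as pd

# Toy per-start pitching log (illustrative values only)
games = pd.DataFrame({
    "date": pd.to_datetime(["2016-04-04", "2016-04-09", "2016-04-14"]),
    "ip": [7.0, 6.0, 5.0],  # innings pitched
    "er": [0, 1, 4],        # earned runs
})

# Running ERA over the season: 9 * cumulative ER / cumulative IP
games["era"] = 9 * games["er"].cumsum() / games["ip"].cumsum()
print(games["era"].round(2).tolist())
```

Once a column like this exists, `games.plot(x="date", y="era")` gives the season-trend picture the post is after.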
preparing-the-code-for-deployment-to-digital-ocean | ryan | technology | OK, we’ve got our server ready for our Django App. We set up Gunicorn and Nginx. We created the user which will run our app and set up all of the folders that will be needed. Now, we work on deploying the code! ## Deploying the Code There are 3 parts to deploying our code: 1. Collect Locally 2. Copy to Server 3. Place in correct directory Why don’t we just copy to the spot on the server we want to finally be in? Because we’ll need to restart Nginx once we’re fully deployed and it’s easier to have that done in 2 steps than in 1. ### Collect the Code Locally My project is structured such that there is a `deploy` folder which is on the same level as my Django Project Folder. That is to say  We want to clear out any old code. To do this we run, from the same level that the Django Project Folder is in:

    rm -rf deploy/*

This will remove ALL of the files and folders that were present. Next, we want to copy the data from the `yoursite` folder to the deploy folder:

    rsync -rv --exclude 'htmlcov' --exclude 'venv' --exclude '*__pycache__*' --exclude '*staticfiles*' --exclude '*.pyc' yoursite/* deploy

Again, running this from the same folder. I’m using `rsync` here as it has a really good API for allowing me to exclude items (I’m sure the above could be done better with a mix of Regular Expressions, but this gets the job done). ### Copy to the Server We have the files collected, now we need to copy them to the server. This is done in two steps. Again, we want to remove ALL of the files in the deploy folder on the server (see rationale from above):

    ssh root@$SERVER "rm -rf /root/deploy/"

Next, we use `scp` to secure copy the files to the server:

    scp -r deploy root@$SERVER:/root/

Our files are now on the server! ### Installing the Code We have several steps to get through in order to install the code. They are: 1. Activate the Virtual Environment 2. 
Deleting old… | 2021-02-14 | OK, we’ve got our server ready for our Django App. We set up Gunicorn and Nginx. We created the user which will run our app and set up all of the folders that will be needed. Now, we work on deploying the code! ## Deploying the Code There are 3 … | Preparing the code for deployment to Digital Ocean | https://www.ryancheley.com/2021/02/14/preparing-the-code-for-deployment-to-digital-ocean/ |
presenting-data-referee-crew-calls-in-the-nfl | ryan | technology | One of the great things about computers is their ability to take tabular data and turn them into pictures that are easier to interpret. I'm always amazed that, when given the opportunity to show data as a picture, more people don't jump at the chance. For example, [this piece on ESPN regarding the difference in officiating crews and their calls](http://www.espn.com/blog/nflnation/post/_/id/225804/aaron-rodgers-could-get-some-help-from-referee-jeff-triplette) has some great data in it regarding how different officiating crews call games. A couple of things I find a bit disconcerting: 1. ~~One of the rows is missing data so that row looks 'odd' in the context of the story and makes it look like the writer missed a big thing ... they didn't~~ (it's since been fixed) 2. This tabular format is just begging to be displayed as a picture. Perhaps the issue here is that the author didn't know how to best visualize the data to make his story, but I'm going to help him out. If we start from the underlying premise that not all officiating crews call games in the same way, we want to see in what ways they differ. The data below is a reproduction of the table from the article:

| REFEREE | DEF. OFFSIDE | ENCROACH | FALSE START | NEUTRAL ZONE | TOTAL |
| --- | --- | --- | --- | --- | --- |
| Triplette, Jeff | 39 | 2 | 34 | 6 | 81 |
| Anderson, Walt | 12 | 2 | 39 | 10 | 63 |
| Blakeman, Clete | 13 | 2 | 41 | 7 | 63 |
| Hussey, John | 10 | 3 | 42 | 3 | 58 |
| Cheffers, Carlton | 22 | 0 | 31 | 3 | 56 |
| Corrente, Tony | 14 | 1 | 31 | 8 | 54 |
| Steratore, Gene | 19 | 1 | 29 | 5 | 54 |
| Torbert, Ronald | 9 | 4 | 31 | 7 | 51 |
| Allen, Brad | 15 | 1 | 28 | 6 | 50 |
| McAulay, Terry | 10 | 4 | 23 | 12 | 49 |
| Vinovich, Bill | 8 | 7 | 29 | 5 | 49 |
| Morelli, Peter | 12 | 3 | 24 | 9 | 48 |
| Boger, Jerome | 11 | 3 | 27 | 6 | 47 |
| Wrolstad, Craig | 9 | 1 | 31 | 5 | 46 |
| Hochuli, Ed | 5 | 2 | 33 | 4 | 44 |
| Coleman, Walt | 9 | 2 | 25 | 4 | 40 |
| Parry, John | 7 | 5 | 20 | 6 | 38 |

The author points out:

> Jeff Triplette's crew has called a combined 81 such penalties -- 18 more than the next-highest crew and more than twice the amount of two others

The author goes on to talk about his interview with [Mike Pereira](https://en.wikipedia.org/wiki/Mike_Pereira) (who happens to be ~~pimping~~ promoting h… | 2016-12-25 | One of the great things about computers is their ability to take tabular data and turn them into pictures that are easier to interpret. I'm always amazed when given the opportunity to show data as a picture, more people don't jump at the chance. For example, [this piece on ESPN …](http://www.espn.com/blog/nflnation/post/_/id/225804/aaron-rodgers-could-get-some-help-from-referee-jeff-triplette) | Presenting Data - Referee Crew Calls in the NFL | https://www.ryancheley.com/2016/12/25/presenting-data-referee-crew-calls-in-the-nfl/ |
prototyping-with-datasette | ryan | technology | At my job I work with some really talented Web Developers that are saddled with a pretty creaky legacy system. We're getting ready to start on a new(ish) project where we'll be taking an old project built on this creaky legacy system (`VB.net`) and re-implementing it on a `C#` backend and an `Angular` front end. We'll be working on a lot of new features and integrations so it's worth rebuilding it versus shoehorning the new requirements into the legacy system. The details of the project aren't really important. What is important is that as I was reviewing the requirements with the Web Developer Supervisor he said something to the effect of, "We can create a proof of concept and just hard code the data in a json file to fake the backend." The issue is ... we already have the data that we'll need in a MS SQL database (it's what is running the legacy version); it's just a matter of getting it into the right json "shape". Creating a 'fake' json object that kind of/maybe mimics the real data is something we've done before, and it ALWAYS seems to bite us in the butt. We don't account for proper pagination, or the real lengths of data in the fields, or NULL values, or whatever shenanigans happen to befall real world data! This got me thinking about [Simon Willison](https://simonwillison.net)'s project [Datasette](https://datasette.io) and using it to prototype the API end points we would need. I had been trying to figure out how to use `db-to-sqlite` to extract data from a MS SQL database into a SQLite database and was successful (see my PR to `db-to-sqlite` [here](https://github.com/ryancheley/db-to-sqlite/tree/ryancheley-patch-1-document-updates#using-db-to-sqlite-with-ms-sql)). With this idea in hand, I reviewed it with the Supervisor and then scheduled a call with the web developers to review `datasette`. During this meeting, I wanted to review: 1. The motivation behind why we would want to use it 2. 
How we could leverage it to do [Rapid Prototyping](https://datasette.io/for/rapid-prototyping) 3. Giv… | 2021-08-09 | At my job I work with some really talented Web Developers that are saddled with a pretty creaky legacy system. We're getting ready to start on a new(ish) project where we'll be taking an old project built on this creaky legacy system (`VB.net`) and re-implementing it on a … | Prototyping with Datasette | https://www.ryancheley.com/2021/08/09/prototyping-with-datasette/ |
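The underlying idea (point the prototype at real rows in SQLite rather than a hand-faked json file) can be sketched with just the standard library; the table and column names here are invented for illustration:

```python
import json
import sqlite3

# Load a few "real-shaped" rows into SQLite instead of hand-writing fake json
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
con.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("Acme", 19.99), ("Globex", None)],  # real data has the NULLs a fake file forgets
)

# Serve the rows back in the json "shape" the front end will consume
con.row_factory = sqlite3.Row
rows = [dict(r) for r in con.execute("SELECT * FROM orders ORDER BY id")]
payload = json.dumps(rows)
print(payload)
```

Datasette does this over HTTP for you (every table gets a `.json` endpoint), which is what makes it so useful for rapid prototyping against real data.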
publishing-content-to-pelican-site | ryan | technology | There are a lot of different ways to get the content for your Pelican site onto the internet. The [Docs show](https://docs.getpelican.com/en/latest/publish.html) an example using `rsync`. For automation they talk about the use of either `Invoke` or `Make` (although you could also use [`Just`](https://github.com/casey/just) instead of `Make`, which is my preferred command runner.) I didn't go with any of these options, opting instead for GitHub Actions. I have [two GitHub Actions](https://github.com/ryancheley/ryancheley.com/tree/main/.github/workflows) that will publish updated content. One action publishes to a UAT version of the site, and the other to the Production version of the site. Why two actions you might ask? Right now it's so that I can work through making my own theme and deploying it without disrupting the content on my production site. Also, it's a workflow that I'm pretty used to: 1. Local Development 2. Push to Development Branch on GitHub 3. Pull Request into Main on GitHub It kind of complicates things right now, but I feel waaay more comfortable with having a UAT version of my site that I can just undo if I need to. Below is the code for the [Prod Deployment](https://raw.githubusercontent.com/ryancheley/ryancheley.com/main/.github/workflows/publish.yml):

    name: Pelican Publish
    on:
      push:
        branches:
          - main
    jobs:
      deploy:
        runs-on: ubuntu-18.04
        steps:
          - name: deploy code
            uses: appleboy/ssh-action@v0.1.2
            with:
              host: ${{ secrets.SSH_HOST }}
              key: ${{ secrets.SSH_KEY }}
              username: ${{ secrets.SSH_USERNAME }}
              script: |
                rm -rf ryancheley.com
                git clone git@github.com:ryanch…

| 2021-07-07 | There are a lot of different ways to get the content for your Pelican site onto the internet. The [Docs show](https://docs.getpelican.com/en/latest/publish.html) an example using `rsync`. 
For automation they talk about the use of either `Invoke` or `Make` (although you could also use [`Just`](https://github.com/casey/just) instead of `Make` which is my preferred … | Publishing content to Pelican site | https://www.ryancheley.com/2021/07/07/publishing-content-to-pelican-site/ |
pushing-changes-from-pythonista-to-github-step1 | ryan | technology | With the most recent release of the iOS app [Workflow](https://workflow.is) I was toying with the idea of writing a workflow that would allow me to update / add a file to a [GitHub repo](https://github.com) via a workflow. My thinking was that since [Pythonista](http://omz-software.com/pythonista/) is only running local files on my iPad, if I could use a workflow to access the API elements to push the changes to my repo, that would be pretty sweet. In order to get this to work I'd need to be able to accomplish the following things (not necessarily in this order): * Have the workflow get a list of all of the repositories in my GitHub * Get the current contents of the app to the clipboard * Commit the changes to the master of the repo I have been able to write a [Workflow](https://workflow.is/workflows/8e986867ff074dbe89c7b0bf9dcb72f5) that will get all of the public repos of a specified github user. Pretty straight forward stuff. The next thing I'm working on is to be able to commit the changes from the clipboard to a specific file in the repo (if one is specified); otherwise a new file would be created. I really just want to 'have the answer' for this, but I know that the journey will be the best part of getting this project completed. So for now, I continue to read the [GitHub API Documentation](https://developer.github.com/v3/) to discover exactly how to do what I want to do. | 2016-10-29 | With the most recent release of the iOS app [Workflow](https://workflow.is) I was toying with the idea of writing a workflow that would allow me to update / add a file to a [GitHub repo](https://github.com) via a workflow. My thinking was that since [Pythonista](http://omz-software.com/pythonista/) is only running local files on my iPad … | Pushing Changes from Pythonista to GitHub - Step 1 | https://www.ryancheley.com/2016/10/29/pushing-changes-from-pythonista-to-github-step1/ |
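From the GitHub API docs, creating or updating a file is a PUT to the repo's contents endpoint with a base64-encoded body. Here is a sketch of assembling that request payload (no network call is made, and the helper name is my own, not part of any library):

```python
import base64
import json

def build_commit_payload(message: str, text: str, sha: str = None) -> str:
    """JSON body for GitHub's 'create or update file contents' endpoint
    (PUT /repos/{owner}/{repo}/contents/{path}); content must be base64."""
    payload = {"message": message, "content": base64.b64encode(text.encode()).decode()}
    if sha:  # the existing file's blob SHA is required when updating
        payload["sha"] = sha
    return json.dumps(payload)

print(build_commit_payload("add script from Pythonista", "print('hello')"))
```

A Workflow action (or a `requests.put` call from Pythonista) would then send this body with an authorization token in the headers.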
receipts | ryan | technology | Every month I set up a budget for my family so that we can track our spending and save money in the ways that we need to while still being able to enjoy life. I have a couple of Siri Shortcuts that will take a picture and then put that picture into a folder in Dropbox. The reason that I have a couple of them is that one is for physical receipts that we got at a store and the other is for online purchases. I’m sure that these could be combined into one, but I haven’t done that yet. One of the great things about these shortcuts is that they will create the folder that the image will go into if it’s not there. For example, the first receipt of March 2019 will create a folder called **March** in the **2019** folder. If the **2019** folder wasn’t there, it would have created it too. What it doesn’t do is create the sub folder that all of my processed receipts will go into. Each month I need to create a folder called `month_name` Processed. And each month I think, there must be a way I can automate this, but because it doesn’t really take that long I’ve never really done it. Over the weekend I finally had the time to try and write it up and test it out. Nothing too fancy, but it does what I want it to do, and a little more.

    # create the variables I'm going to need later
    y=$( date +"%Y" )
    m=$( date +"%B" )
    p=$( date +"%B_Processed" )

    # check to see if the Year folder exists and if it doesn't, create it
    if [ ! -d /Users/ryan/Dropbox/Family/Financials/$y ]; then
        mkdir /Users/ryan/Dropbox/Family/Financials/$y
    fi

    # check to see if the Month folder exists and if it doesn't, create it
    if [ ! -d /Users/ryan/Dropbox/Family/Financials/$y/$m ]; then
        mkdir /Users/ryan/Dropbox/Family/Financials/$y/$m
    fi

    # check to see if the Month_Processed folder exists and if it doesn't, create it
    if [ ! -d "/Users/ryan/Dropbox/Family/Financials/$y/$m/$p" ]; then
        mkdir "/Users/ryan/Dropbox/Family/Financials/$y/$m/$p"
    fi

The last se… | 2019-03-16 | Every month I set up a budget for my family so that we can track our spending and save money in the ways that we need to while still being able to enjoy life. I have a couple of Siri Shortcuts that will take a picture and then put that … | Receipts | https://www.ryancheley.com/2019/03/16/receipts/ |
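For comparison, the same month-folder logic in Python: `pathlib.Path.mkdir` with `parents=True, exist_ok=True` collapses all three existence checks into one call (the base path here is shortened for illustration):

```python
from datetime import date
from pathlib import Path

base = Path("Financials")  # stands in for /Users/ryan/Dropbox/Family/Financials
today = date.today()

# One call replaces all three bash checks: parents=True builds Year/Month
# as needed, exist_ok=True makes reruns a no-op
processed = base / today.strftime("%Y") / today.strftime("%B") / f"{today.strftime('%B')}_Processed"
processed.mkdir(parents=True, exist_ok=True)
```

Either version could be scheduled with `cron` or `launchd` so the folders exist before the first receipt of the month arrives.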
setting-the-timezone-on-my-server | ryan | technology | When I scheduled my last post on December 14th to be published at 6pm that night I noticed that the scheduled time was a bit … off:  I realized that the server time was still set to GMT and that I had missed the step in the Linode Getting Started guide to Set the Timezone. No problem: I found the Guide, went to [this](https://linode.com/docs/getting-started/#set-the-timezone "Set the Timezone") section and ran the following command: `sudo dpkg-reconfigure tzdata` I then selected my country (US) and my time zone (Pacific-Ocean) and now the server has the right timezone. | 2017-12-15 | When I scheduled my last post on December 14th to be published at 6pm that night I noticed that the scheduled time was a bit … off:  I realized that the server time was still set to GMT and that I had missed the step in the Linode Getting Started guide … | Setting the Timezone on my server | https://www.ryancheley.com/2017/12/15/setting-the-timezone-on-my-server/ |
setting-up-itfdb-with-a-voice | ryan | technology | In a [previous post](/itfdb.html) I wrote about my Raspberry Pi experiment to have the SenseHat display a scrolling message 10 minutes before game time. One of the things I have wanted to do since then is have Vin Scully’s voice come from a speaker and say those five magical words, `It's time for Dodger Baseball!` I found a clip of [Vin on Youtube](https://www.youtube.com/watch?v=4KwFuGtGU6c) saying that (and a little more). I wasn’t sure how to get the audio from that YouTube clip though. After a bit of googling¹ I found a command line tool called [youtube-dl](https://rg3.github.io/youtube-dl/). The tool allowed me to download the video as an `mp4` with one simple command: youtube-dl https://www.youtube.com/watch?v=4KwFuGtGU6c Once the mp4 was downloaded I needed to extract the audio from the `mp4` file. Fortunately, `ffmpeg` is a tool for just this type of exercise! I modified [this answer from StackOverflow](https://stackoverflow.com/questions/9913032/ffmpeg-to-extract-audio-from-video) to meet my needs ffmpeg -i dodger_baseball.mp4 -ss 00:00:10 -t 00:00:9.0 -q:a 0 -vn -acodec copy dodger_baseball.aac This got me an `aac` file, but I was going to need an `mp3` to use in my Python script. Next, I used a [modified version of this suggestion](https://askubuntu.com/questions/35457/converting-aac-to-mp3-via-command-line) to write my own command ffmpeg -i dodger_baseball.aac -c:a libmp3lame -ac 2 -b:a 190k dodger_baseball.mp3 I could have probably combined these two steps, but … meh. OK. Now I have the famous Vin Scully saying the best five words on the planet. All that’s left to do is update the python script to play it. Using guidance from [here](https://raspberrypi.stackexchange.com/questions/7088/playing-audio-files-with-python) I updated my `itfdb.py` file from this: if month_diff == 0 and day_diff == 0 and hour_diff == 0 and 0 >= minute_diff >= -10: message = '#ITFDB!!!
The Dodgers will be playing {} at {}'.f… | 2018-03-15 | In a [previous post](/itfdb.html) I wrote about my Raspberry Pi experiment to have the SenseHat display a scrolling message 10 minutes before game time. One of the things I have wanted to do since then is have Vin Scully’s voice come from a speaker and say those five magical … | Setting up ITFDB with a voice | https://www.ryancheley.com/2018/03/15/setting-up-itfdb-with-a-voice/ |
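The two `ffmpeg` invocations above can probably be combined: trim with `-ss`/`-t`, drop the video with `-vn`, and transcode straight to mp3 in one pass. This is a hypothetical one-step version carrying over the post's timestamps and bitrate; the function only assembles and prints the command so it can be inspected (or copy-pasted) without `ffmpeg` installed:

```shell
# Hypothetical single-step extract-and-transcode command (not from the post):
# build_cmd just prints the ffmpeg invocation so it can be checked before running.
build_cmd() {
  printf 'ffmpeg -i %s -ss 00:00:10 -t 00:00:09 -vn -c:a libmp3lame -ac 2 -b:a 190k %s\n' "$1" "$2"
}
build_cmd dodger_baseball.mp4 dodger_baseball.mp3
```

Skipping the intermediate `aac` file avoids the lossless-copy step entirely, at the cost of one slightly longer command line.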
setting-up-jupyter-notebook-on-my-linode | ryan | technology | A [Jupyter Notebook](http://jupyter.org) is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: 1. data cleaning and transformation 2. numerical simulation 3. statistical modeling 4. data visualization 5. machine learning 6. and other stuff I’ve been interested in how to set up a Jupyter Notebook on my [Linode](https://www.linode.com) server for a while, but kept running into a roadblock (either mental or technical I’m not really sure). Then I came across this ‘sweet’ solution to get them set up at <http://blog.lerner.co.il/five-minute-guide-setting-jupyter-notebook-server/> My main issue was what I needed to do to keep the Jupyter Notebook running once I disconnected from the command line. The solution above gave me what I needed to solve that problem nohup jupyter notebook `nohup` allows you to disconnect from the terminal but keeps the command running in the background (which is exactly what I wanted). The next thing I wanted to do was to have the `jupyter` notebook server run from a directory that wasn’t my home directory. To do this was way easier than I thought. You just run `nohup jupyter notebook` from the directory you want to run it from. The last thing to do was to make sure that the notebook would start up with a server reboot. For that I wrote a shell script # change to correct directory cd /home/ryan/jupyter nohup jupyter notebook &> /home/ryan/output.log The last command is a slight modification of the line from above. I really wanted the output to get directed to a file that wasn’t in the directory that the `Jupyter` notebook would be running from. Not any reason (that I know of anyway) … I just didn’t like the `nohup.out` file in the working directory. Anyway, I now have a running Jupyter Notebook at <http://python.ryancheley.com:8888>¹ 1.
I’d like to update this to be running from a port other than 8888 AND I’d lik… | 2018-05-27 | A [Jupyter Notebook](http://jupyter.org) is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: 1. data cleaning and transformation 2. numerical simulation 3. statistical modeling 4. data visualization 5. machine learning 6. and other stuff I’ve been interested in how to set … | Setting up Jupyter Notebook on my Linode | https://www.ryancheley.com/2018/05/27/setting-up-jupyter-notebook-on-my-linode/ |
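One caveat worth noting about the startup script above: `nohup jupyter notebook &> /home/ryan/output.log` as written blocks until the notebook exits, so a boot hook running it would hang. Appending `&` backgrounds the process. A sketch of the pattern — `sleep 2` stands in for `jupyter notebook` here so it can be demonstrated without Jupyter installed, and the paths are the post's:

```shell
#!/usr/bin/env bash
# Change to the notebook directory, then launch in the background with a
# trailing '&' so the calling shell (or boot hook) isn't blocked.
# 'sleep 2' is a stand-in for 'jupyter notebook'.
cd "${NOTEBOOK_DIR:-$HOME}" || exit 1
nohup sleep 2 >> "$HOME/output.log" 2>&1 &
echo "started pid $!"
```

Because both stdout and stderr are redirected, `nohup` also skips creating its default `nohup.out` in the working directory, which was the other goal of the script.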
setting-up-multiple-django-sites-on-a-digital-ocean-server | ryan | technology | If you want to have more than 1 Django site on a single server, you can. It’s not too hard, and using the Digital Ocean tutorial as a starting point, you can get there. Using [this tutorial](https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04) as a start, we set up so that there are multiple Django sites being served by `gunicorn` and `nginx`. ## Creating `systemd` Socket and Service Files for Gunicorn The first thing to do is to set up 2 Django sites on your server. You’ll want to follow the tutorial referenced above and just repeat for each. Start by creating and opening two systemd socket files for Gunicorn with sudo privileges: Site 1 sudo vim /etc/systemd/system/site1.socket Site 2 sudo vim /etc/systemd/system/site2.socket The contents of the files will look like this: [Unit] Description=siteX socket [Socket] ListenStream=/run/siteX.sock [Install] WantedBy=sockets.target Where `siteX` is the site you want to serve from that socket Next, create and open a systemd service file for Gunicorn with sudo privileges in your text editor. The service filename should match the socket filename with the exception of the extension sudo vim /etc/systemd/system/siteX.service The contents of the file will look like this: [Unit] Description=gunicorn daemon Requires=siteX.socket After=network.target [Service] User=sammy Group=www-data WorkingDirectory=path/to/directory ExecStart=path/to/gunicorn/directory --access-logfile - --workers 3 --bind unix:/run/siteX.sock myproject.wsgi:application [Install] WantedBy=multi-user.target Again, `siteX` is the site you want to serve Follow the tutorial for testing Gunicorn ## Nginx server { listen 80; server_name server_domain_or_IP; locat… | 2021-03-07 | If you want to have more than 1 Django site on a single server, you can.
It’s not too hard, and using the Digital Ocean tutorial as a starting point, you can get there. Using [this tutorial](https://www.digitalocean.com/community/tutorials/how-to- set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04) as a start, we set up so that there are multiple Django … | Setting up multiple Django Sites on a Digital Ocean server | https://www.ryancheley.com/2021/03/07/setting-up-multiple-django-sites-on-a-digital-ocean-server/ |
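The nginx half of the multi-site setup is truncated above. Per site, the server block proxies everything except static files to that site's gunicorn socket. A sketch for one site, following the `siteX` naming convention from the socket files — the domain, static root, and user path are placeholders, not values from the post:

```nginx
server {
    listen 80;
    server_name site1.example.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    # Static files are served by nginx directly, not by gunicorn
    location /static/ {
        root /home/sammy/site1;
    }

    # Everything else is proxied to this site's gunicorn socket
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/site1.sock;
    }
}
```

Repeating the block with `site2` everywhere `site1` appears (and a second `server_name`) is what lets one nginx instance front both Django sites.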
setting-up-the-server-on-digital-ocean | ryan | technology | ## The initial setup Digital Ocean has a pretty nice API which makes it easy to automate the creation of their servers (which they call `Droplets`). This is nice when you’re trying to work towards automation of the entire process (like I was). I won’t jump into the automation piece just yet, but once you have your DO account setup (sign up [here](https://m.do.co/c/cc5fdad15654) if you don’t have one), it’s a simple interface to [Setup Your Droplet](https://www.digitalocean.com/docs/droplets/how-to/create/). I chose the Ubuntu 18.04 LTS image with a \$5 server (1GB Ram, 1CPU, 25GB SSD Space, 1000GB Transfer) hosted in their San Francisco data center (SFO2¹). ## We’ve got a server … now what? We’re going to want to update, upgrade, and install all of the (non-Python) packages for the server. For my case, that meant running the following: apt-get update apt-get upgrade apt-get install python3 python3-pip python3-venv tree postgresql postgresql-contrib nginx That’s it! We’ve now got a server that is ready to be set up for our Django Project. In the next post, I’ll walk through how to get your Domain Name to point to the Digital Ocean Server. 1. SFO2 is disabled for new customers and you will now need to use SFO3 unless you already have resources on SFO2, but if you’re following along you probably don’t. What’s the difference between the two? Nothing 😁 ↩︎ | 2021-01-31 | ## The initial setup Digital Ocean has a pretty nice API which makes it easy to automate the creation of their servers (which they call `Droplets`). This is nice when you’re trying to work towards automation of the entire process (like I was). I won’t jump into the automation … | Setting up the Server (on Digital Ocean) | https://www.ryancheley.com/2021/01/31/setting-up-the-server-on-digital-ocean/ |
setting-up-the-server-to-host-pelican | ryan | technology | # Creating the user on the server Each site on my server has its own user. This is a security consideration, more than anything else. For this site, I used the steps from [some of my scripts for setting up a Django site](https://www.ryancheley.com/2021/02/21/automating-the-deployment/). In particular, I ran the following code from the shell on the server: adduser --disabled-password --gecos "" ryancheley adduser ryancheley www-data The first command above creates the user with no password so that they can't actually log in. It also creates the home directory `/home/ryancheley`. This is where the site will be served from. The second command adds the user to the `www-data` group. I don't think that's strictly necessary here, but in order to keep this user consistent with the other web site users, I ran it to add it to the group. # Creating the nginx config file For the most part I cribbed the `nginx` config files from this [blog post](https://michael.lustfield.net/nginx/blog-with-pelican-and-nginx). There were some changes that were required though. As I indicated in part 1, I had several requirements I was trying to fulfill, most notably not breaking historic links. Here is the config file for my UAT site (the only difference between this and the prod site is the `server_name` value): server { server_name uat.ryancheley.com; root /home/ryancheley/output; location / { # Serve a .gz version if it exists gzip_static on; error_page 404 /404.html; rewrite ^/index.php/(.*) /$1 permanent; } location = /favicon.ico { … | 2021-07-05 | # Creating the user on the server Each site on my server has its own user. This is a security consideration, more than anything else.
For this site, I used the steps from [some of my scripts for setting up a Django site](https://www.ryancheley.com/2021/02/21/automating-the-deployment/). In particular, I ran the following code from … | Setting up the Server to host my Pelican Site | https://www.ryancheley.com/2021/07/05/setting-up-the-server-to-host-pelican/ |
setting-up-the-site-with-ssl | ryan | technology | I’ve written about my migration from Squarespace to Wordpress earlier this year. One thing I lost with that migration when I went to Wordpress in AWS was having SSL available. While I’m sure Van Hoet will “well actually” me on this, I never could figure out how to set it up ( not that I tried particularly hard ). The thing is now that I’m hosting on Linode I’m finding some really useful tutorials. This one showed me exactly what I needed to do to get it set up. Like any good planner I read the how-to several times and convinced myself that it was actually relatively straightforward to do and so I started. ## Step 1 Creating the cert files Using [this tutorial](https://www.linode.com/docs/security/ssl/create-a-self-signed-certificate-on-debian-and-ubuntu "Creating Self Signed Certificates on Ubuntu") I was able to create the required certificates to set up SSL. Of course, I ran into an issue when trying to run this command `chmod 400 /etc/ssl/private/example.com.key` I did not have permission to chmod on that file. After a bit of Googling I found that I can switch to interactive root mode by running the command `sudo -i` It feels a bit dangerous to be able to just do that (I didn’t have to enter a password) but it worked. ## Step 2 OK, so the tutorial above got me most(ish) of the way there, but I needed to sign my own certificate. For that I used this [tutorial](https://www.linode.com/docs/security/ssl/install-lets-encrypt-to-create-ssl-certificates "SSL"). I followed the directions but kept coming up with an error: `Problem binding to port 443: Could not bind to the IPv4 or IPv6` I rebooted my Linode server. I restarted apache. I googled and I couldn’t find the answer I was looking for. I wanted to give up, but tried Googling one more time. Finally! An answer so simple it couldn’t work. But then it did. Stop Apache, run the command to start Apache back up and boom. The error went away and I had a certificate.
However, when I tested the site using [SSL Labs](https://www.ssllabs.com/ssltest/analyz… | 2017-12-15 | I’ve written about my migration from Squarespace to Wordpress earlier this year. One thing I lost with that migration when I went to Wordpress in AWS was having SSL available. While I’m sure Van Hoet will “well actually” me on this, I never could figure out how to … | Setting up the site with SSL | https://www.ryancheley.com/2017/12/15/setting-up-the-site-with-ssl/ |
so-you-want-to-give-a-talk-at-a-conference | ryan | technology | Last October I gave my first honest-to-goodness, on my own, up on the stage by myself talk at a tech conference. It was the most stressful yet fulfilling professional experience I've had. Fulfilling in that I've wanted to get better at speaking in public and this helped in that goal. Stressful in that I really wanted to do a good job and wasn't sure that I could, or worse, that anyone would care about what I had to say. Well, neither of those things turned out to be true. I did get a lot of good feedback which tells me I did a good job, and people were very encouraging for the words that I had to say, so people did care. My presentation went so well that I was even [interviewed by Jay Miller](https://youtu.be/WkeRI7LkBeY?si=gIgeMODD3aQJsfvX). You can see my actual talk [here](https://youtu.be/VPldDxuJDsg?si=r2ob3j4zIeYZY7tO), but I thought it would also be interesting for you to see how I got here. ## Submitting the idea I submitted my talk idea for DCUS 2023 in May and it was selected in June. That gave me roughly 3 months to get my loose outline of an idea into a 45 minute talk. ## Brainstorming how the talk would go I have tried to get a better workflow for brainstorming ideas in general, but I really wanted to up my game for this talk. To that end I used the [Story Teller Tactics](https://pipdecks.com/pages/storyteller-tactics-card-deck) cards to help determine the path of the story I would tell in my presentation. That helped when I got to mind mapping¹ my talk.  The use of the Story Teller Tactics, combined with my mind map, led to a starting point for creating my presentation ## My 'Oh Sh%t moment' Back in early July I was browsing Mastodon (instead of working on my presentation) and came across a link to an article with the title [How To Become A Better Speaker At Conferences](https://www.smashingmagazine.com/2023/07/become-better-speaker-conferences/).
I saved it to my read it later service and went on brow… | 2023-12-15 | Last October I gave my first honest to goodness, on my own, up on the stage by myself talk at a tech conference. It was the most stressful yet fulfilling professional experience I've had. Fulfilling in that I've wanted to get better at speaking in public and this helped in … | So you want to give a talk at a conference? | https://www.ryancheley.com/2023/12/15/so-you-want-to-give-a-talk-at-a-conference/ |
ssh-keys | ryan | technology | If you want to access a server in a 'passwordless' way, the best approach I know is to use SSH Keys. This is great, but what does that mean and how do you set it up? I'm going to attempt to write out the steps for getting this done. Let's assume we have two servers, `web1` and `web2`. These two servers have 1 non-root user which I'll call `user1`. So we have something like this * `user1@web1` * `user1@web2` Suppose we want to allow user1 from web2 to access web1. At a high level, to allow `user1` on `web2` SSH access to `web1`, we need to: 1. Create `user1` on `web1` 2. Create `user1` on `web2` 3. Create SSH keys on `web2` for `user1` 4. Add the public key for `user1` from `web2` onto the `authorized_keys` file for `user1` on `web1` OK, let's try this. I am using DigitalOcean and will be taking advantage of their CLI tool `doctl`. To create a droplet, there are two required arguments: * image * size I'm also going to include a few other options * tag * region * ssh-keys¹ Below is the command to use to create a server called `web-t-001` doctl compute droplet create web-t-001 \ --image ubuntu-24-04-x64 \ --size s-1vcpu-1gb \ --enable-monitoring \ --region sfo2 \ --tag-name test \ --ssh-keys $(doctl compute ssh-key list --output json | jq -r 'map(.id) | join(",")') and to create a server called `web-t-002` doctl compute droplet create web-t-002 \ --image ubuntu-24-04-x64 \ --size s-1vcpu-1gb \ --enable-monitoring \ --region sfo2 \ --tag-name test \ --ssh-keys $(doctl compute ssh-key list --output json | jq -r 'map(.id) | join(",")') The values for the `ssh-keys` above will get all of the ssh-keys I have stored at DigitalOcean and add them. The output looks something like: > \--ssh-keys 1234, 4567, 6789, 1222 Now that we've created two droplets called `web-t-001` and `web-t-002` we can set up user1 on each of the servers.
I'll SSH as root into each of the serv… | 2024-07-13 | If you want to access a server in a 'passwordless' way, the best approach I know is to use SSH Keys. This is great, but what does that mean and how do you set it up? I'm going to attempt to write out the steps for getting this done. Let's … | SSH Keys | https://www.ryancheley.com/2024/07/13/ssh-keys/ |
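Steps 3 and 4 of the list above boil down to generating a key pair and appending the public half to the other server's `authorized_keys`. On real hosts, `ssh-copy-id user1@web1` (run as `user1` on `web2`) does the append for you; the sketch below shows the same mechanics against local stand-in directories so nothing touches a live server:

```shell
#!/usr/bin/env bash
# Local stand-ins for ~/.ssh on web2 (holds the key pair) and on web1
# (holds authorized_keys); on real servers these are the two users' homes.
web2_ssh=$(mktemp -d)
web1_ssh=$(mktemp -d)

# Step 3: create a key pair for user1 on web2 (ed25519, no passphrase)
ssh-keygen -t ed25519 -N "" -q -f "$web2_ssh/id_ed25519"

# Step 4: append the *public* key to authorized_keys on web1;
# sshd refuses loose permissions, hence the chmod
cat "$web2_ssh/id_ed25519.pub" >> "$web1_ssh/authorized_keys"
chmod 600 "$web1_ssh/authorized_keys"
```

The private key (`id_ed25519`) never leaves `web2`; only the `.pub` file crosses over, which is what makes the passwordless login safe to set up.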
ssl-finally | ryan | technology | I’ve been futzing around with SSL on this site since last December. I’ve had about 4 attempts and it just never seemed to work. Earlier this evening I was thinking about getting a second [Linode](https://www.linode.com) just to get a fresh start. I was _this_ close to getting it when I thought, what the hell, let me try to work it out one more time. And this time it actually worked. I’m not really sure what I did differently, but using this [site](https://certbot.eff.org/lets-encrypt/ubuntuxenial-apache) seemed to make all of the difference. The only other thing I had to do was make a change in the WordPress settings (from `http` to `https`) and enable a plugin [Really Simple SSL](https://really-simple-ssl.com) and it finally worked. I even got an ‘A’ from SSL Labs!  Again, not really sure why this seemed so hard and took so long. I guess sometimes you just have to try over and over and over again. | 2018-04-07 | I’ve been futzing around with SSL on this site since last December. I’ve had about 4 attempts and it just never seemed to work. Earlier this evening I was thinking about getting a second [Linode](https://www.linode.com) just to get a fresh start. I was _this_ close to getting it … | SSL ... Finally! | https://www.ryancheley.com/2018/04/07/ssl-finally/ |
styling-cleanup | ryan | technology | I have a side project I've been working on for a while now. One thing that happened over time is that the styling of the site grew organically. I'm not a designer, and I didn't have a master set of templates or design principles guiding the development. I kind of hacked it together and made it look "nice enough." That was until I really started going from one page to another and realized that the styling of various pages wasn't just a little off ... but A LOT off. As an aside, I'm using [tailwind](https://www.tailwind.com) as my CSS Framework. I wanted to make some changes to the styling and realized I had two choices: 1. Manually go through each html template (the project is a Django project) and catalog the styles used for each element OR 2. Try and write a `bash` command to do it for me Well, before we jump into either choice, let's see how many templates there are to review! As I said above, this is a Django project. I keep all of my templates in a single `templates` directory with each app having its own subdirectory. I was able to use this one line to count the number of `html` files in the templates directory (and all of the subdirectories as well) ls -R templates | grep html | wc -l There are 3 parts to this: 1. `ls -R templates` will recursively list all of the files in the templates directory and its subdirectories 2. `grep html` will make sure to only return those files with `html` 3. `wc -l` uses the word, line, character, and byte count utility to return the number of lines returned by the previous command In each case one command is piped to the next. This resulted in 41 `html` files. OK, I'm not going to want to manually review 41 files. Looks like we'll be going with option 2, "Try and write a `bash` command to do it for me" In the end the `bash` script is actually relatively straight forward. We're just using `grep` two times.
But it's the options passed to `grep` (as well as the regex used) that make the magic happen. The first t… | 2021-10-26 | I have a side project I've been working on for a while now. One thing that happened overtime is that the styling of the site grew organically. I'm not a designer, and I didn't have a master set of templates or design principals guiding the development. I kind of hacked … | Styling Clean Up with Bash | https://www.ryancheley.com/2021/10/26/styling-cleanup/ |
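One caveat about the `ls -R templates | grep html | wc -l` pipeline: `grep html` also matches directory names or files that merely contain "html" somewhere in the name. A `find` with a `-name` glob counts only files with the actual `.html` extension. A small self-contained demonstration (the throwaway tree is made up for the example; on the real project you'd run the `find` against your `templates` directory):

```shell
#!/usr/bin/env bash
# Build a throwaway templates tree, then count the .html files with find,
# which matches the extension exactly instead of 'html' anywhere in the name.
tmp=$(mktemp -d)
mkdir -p "$tmp/templates/app1" "$tmp/templates/app2"
touch "$tmp/templates/base.html" \
      "$tmp/templates/app1/index.html" \
      "$tmp/templates/app2/detail.html" \
      "$tmp/templates/app1/html_notes.txt"   # would fool 'grep html'
find "$tmp/templates" -type f -name '*.html' | wc -l
```

`find` also handles filenames with spaces or newlines, which `ls | grep` pipelines famously don't.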
switching-to-linode | ryan | technology | I’ve been listening to a _lot_ of Talk Python to me lately ... I mean a _lot_. Recently there was a coupon code for Linode that basically got you four months free with a purchase of a single month, so I thought, ‘what the hell’? Anyway, I have finally been able to move everything from AWS to Linode for my site and I’m able to publish from my beloved Ulysses. Initially there was an issue with xmlrpc which I still haven’t fully figured out. I tried every combination of everything and finally I’m able to publish. I’m not one to look a gift horse in the mouth so I’ll go ahead and take what I can get. I had meant to document a bit more / better what I had done, but since it basically went from **not** working to working, I wouldn’t know what to write at this point. The strangest part is that from the terminal the code I was using to test the issue still returns an xmlrpc faultCode error of -32700 but I’m able to connect now. I really wish I understood this better, but I’m just happy that I’m able to get it all set and ready to go. Next task ... set up SSL! | 2017-12-03 | Switching to Linode I’ve been listening to a _lot_ of Talk Python to me lately ... I mean a _lot_. Recently there was a coupon code for Linode that basically got you four months free with a purchase of a single month, so I thought, ‘what the hell’? Anyway, I … | Switching to Linode | https://www.ryancheley.com/2017/12/03/switching-to-linode/ |
taking-down-the-rpi-camera-over-my-hummingbird-feeder | ryan | technology | As the temperature heats up it’s time to take down my hummingbird feeder. While the winds have cooled down the valley for the last few days, 100+ days are slowly creeping in and I need to take it down before the CPU melts. When I took it down last year I thought, meh, how hard could it be to put back up. And then I put it back up in the Fall last year and had nothing but problems. This year, I wanted to document the wires and what not so that I can just put it back up once the temps cool down outside. Anyway, I could describe it or just take some pictures ... so here are some pictures for when I need to set it up again later this year. Above the feeder:  Wires to the sensor:  Wires to the GPIO pins:  | 2019-06-23 | As the temperature heats up it’s time to take down my hummingbird feeder. While the winds have cooled down the valley for the last few days, 100+ days are slowly creeping in and I need to take it down before the CPU melts. When I took it down last … | Taking Down the RPi Camera Over My Hummingbird Feeder | https://www.ryancheley.com/2019/06/23/taking-down-the-rpi-camera-over-my-hummingbird-feeder/ |
talk-python-build-10-apps-review | ryan | technology | Michael Kennedy over at Talk Python had a sale on his courses over the holidays so I took the plunge and bought them all. I have been listening to the podcast for several months now so I knew that I wouldn’t mind listening to him talk during a course (which is important!). The first course I watched was ‘Python Jumpstart by Building 10 Apps’. The apps were: * App 1: Hello (you Pythonic) world * App 2: Guess that number game * App 3: Birthday countdown app * App 4: Journal app and file I/O * App 5: Real-time weather client * App 6: LOLCat Factory * App 7: Wizard Battle App * App 8: File Searcher App * App 9: Real Estate Analysis App * App 10: Movie Search App For each app you learn a specific set of skills related to either Python or writing ‘Pythonic’ code. I think the best part was that since it was all self paced I was able to spend time where I wanted to exploring ideas and concepts that wouldn’t have been available in traditional classrooms. Also, since I’m fully adulted it can be hard to find time to watch and interact with courses like this so being able to watch them when I wanted to was a bonus. Hello (you Pythonic) world is what you would expect from any introductory course. You write the basic ‘Hello World’ script, but with a twist. For this app you interact with it so that it asks your name and then it will output ‘Hello username my name is HAL!’ … although because I am who I am HAL wasn’t the name in the course, it was just the name I chose for the app. My favorite app to build and use was the Wizard App (app 7). It is a text adventure influenced by dungeons and dragons and teaches about classes and inheritance and polymorphism. It was pretty cool. The version that you are taught to make only has 4 creatures and ends pretty quickly.
I enhanced the game to have it randomly create up to 250 creatures (some of them poisonous) and you level up during the game so that you can feel like a real character in an RPG. The journal application was interesting because I finally started to g… | 2018-03-20 | Michael Kennedy over at Talk Python had a sale on his courses over the holidays so I took the plunge and bought them all. I have been listening to the podcast for several months now so I knew that I wouldn’t mind listening to him talk during a course … | Talk Python Build 10 Apps Review | https://www.ryancheley.com/2018/03/20/talk-python-build-10-apps-review/ |