The search for XOAUTH2 for notification emails
I've been using sSMTP with Gmail for a few years to be notified of cron job failures and automatic bans. It's reasonably easy to set up, but I wanted to improve the security. Through a series of dead ends, I eventually found myself setting up my own outgoing mail server, and found it pretty easy once I sifted through all the noise.
The Appeal
I've had two-factor authentication enabled on my Gmail account since 2011. Thus I'm familiar with "App Passwords": Google-generated passwords that are limited to a few services (like SMTP and IMAP) but can't be used to log in via the web interface.
App Passwords have always been a last resort and so there's a drive to replace them with more secure methods like OAuth, but it's a slow process. I care about the security of my Gmail account and am interested in "best practices," so I'd been looking to migrate to XOAUTH2 for a few years now. With something like OAuth, you can restrict usage to just certain operations, like sending email.
During a recent trip to my Google account settings I noticed the App Passwords still there, and renewed an interest to replace them.
As is probably true for many people, if someone got access (even just read access) to my Gmail they could do a lot of harm. They could use the "I forgot my password" feature on many websites to reset passwords, see the confirmation email sent to my inbox, and then change the password to something they know.
A long way to an unexpected dead-end
It had been several years since I last investigated, so I hoped for some progress. I was only to be disappointed.
I first checked whether sSMTP supported XOAUTH2, but no: it only supports LOGIN and CRAM-MD5. So I then searched to figure out whether there was any work to add it, or what problems had been encountered.
What I found was that sSMTP is basically unmaintained, and a weird project to begin with, as it never really evolved out of its Debian maintenance. Its official webpage is the Debian package tracker, its bug tracker is the Debian bug tracker, and its source repository is the Debian package source repository. Various people do seem to exchange patches as necessary, but it isn't close to vibrant development.
I searched for alternative /usr/bin/sendmail replacements, but they all seemed to be much bigger, full-blown mail transfer agents (MTAs) like sendmail itself. I wanted something small and simple. And they didn't seem to support XOAUTH2 to boot.
Back on sSMTP, I found an interesting Git repo, but it didn't yield too much in the end. And I also found a bug, with a patch attached, to add SASL support. If an application supports a SASL library, the SASL library is able to add new authentication mechanisms without needing to modify the application. So this could be a way to get XOAUTH2.
So I searched for SASL support for XOAUTH2. I basically only found cyrus-sasl-xoauth2. But it requires having a file with the OAuth tokens stored in it! Those tokens expire after an hour or so, so this proved to be near useless. Effectively, there's no usable SASL support for XOAUTH2 either.
I considered writing the support myself. The annoying part would be the OAuth token. I'd need to find a library to help me retrieve it in C, and I'd probably want to cache the OAuth token as a file. Basically, the same part the SASL implementation avoided.
I also considered writing my own SMTP client. Doing it in either Python or Go seemed relatively easy. Neither standard library's SMTP client supports XOAUTH2 out of the box, but it'd be just a bit of glue code to combine with existing OAuth APIs. But this also required making a mailx/sendmail-emulating executable, which would be tedious due to the number of flags.
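The glue really is small: XOAUTH2 is just a base64-encoded string handed to `AUTH XOAUTH2`. A minimal Python sketch, assuming the access token is obtained elsewhere (the `send_via_gmail` helper and its token argument are my own illustration, not code from any library):

```python
import base64
import smtplib

def xoauth2_initial_response(user, access_token):
    # The XOAUTH2 mechanism's initial client response is
    # "user=<user>\x01auth=Bearer <token>\x01\x01", base64-encoded.
    raw = "user=%s\x01auth=Bearer %s\x01\x01" % (user, access_token)
    return base64.b64encode(raw.encode("ascii")).decode("ascii")

def send_via_gmail(user, access_token, from_addr, to_addrs, message):
    # Hypothetical glue: authenticate with AUTH XOAUTH2, then send as usual.
    smtp = smtplib.SMTP("smtp.gmail.com", 587)
    smtp.starttls()
    smtp.docmd("AUTH", "XOAUTH2 " + xoauth2_initial_response(user, access_token))
    smtp.sendmail(from_addr, to_addrs, message)
    smtp.quit()
```

The hard part that this skips is exactly the part mentioned above: retrieving and refreshing the access token itself.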
And then I stumbled on the realization that XOAUTH2 provides little improvement. SMTP access requires the https://mail.google.com/ scope (https://www.googleapis.com/auth/gmail.send is insufficient), and that scope provides full access to the account. Even with XOAUTH2 I can't limit access to just sending email! So it's not worth any effort for me. I'm sure glad I didn't realize that only after a bunch of coding!
Alternatives
I considered writing my own mailx/sendmail emulation executable that used the Google-native mail sending API. This is basically equivalent to one of the earlier ideas, but using a different protocol, which would only need the "send email" scope. The OAuth token wouldn't be able to read my email. But this would have the same annoyance of needing to make a mailx/sendmail-emulating executable. I was also beginning to consider the damage that someone could do with just email-sending access to my account.
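For reference, the Gmail API route boils down to base64url-encoding the RFC 2822 message and POSTing it as the "raw" field to the API's messages.send endpoint. A sketch of just the payload construction (the HTTP call and token handling are omitted; `build_send_payload` is my own helper name):

```python
import base64
from email.mime.text import MIMEText

def build_send_payload(to, subject, body, sender="me"):
    # The Gmail API's messages.send takes a JSON body whose "raw" field
    # is the full RFC 2822 message, base64url-encoded.
    msg = MIMEText(body)
    msg["To"] = to
    msg["From"] = sender
    msg["Subject"] = subject
    raw = base64.urlsafe_b64encode(msg.as_bytes()).decode("ascii")
    return {"raw": raw}
```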
I realized this was all a massive workaround. I really wanted a solution where the machines had no access to my Gmail account. The normal solutions here would be to:
- Use the SMTP server provided by my ISP. I think this would have been possible 2 years or so ago, but it doesn't appear I can set this up any more.
- Make a robot Gmail account. Just a trash account that does nothing but send me these emails. This would have been easy, but I have an aversion to creating trash accounts and wanted a "real" (or maybe "pure") solution.
- Pay a provider. This would have been fine, but in 5 minutes of searching I didn't see any super-cheap providers (say, less than $5 a month), and it is little different from the robot Gmail account.
But I instead went the seemingly-crazy route of hosting my own SMTP server. And it turned out not to be so bad.
Google Authenticator
I have been happily using two-factor authentication with my Google account for months. I appreciate the added security and it hasn't been much hassle.
I decided it would be a good idea to implement something similar for access to my server. There are several options available, but I selected using pam_google_authenticator. It integrates with the Google Authenticator phone application and supports backup OTPs for when your phone is unavailable.
Since I am using Arch, the process begins with installing the google-authenticator-libpam-hg package from the AUR. Normally this would be an easy task, but for some reason hg clone fails during the build. I worked around the problem by running the hg clone https://code.google.com/p/google-authenticator command manually in my home directory, and then creating a symlink to it for use in the build script. I also installed qrencode for generating QR codes.
Now that it is installed, you have to configure PAM to make use of the new module. I created a new file /etc/pam.d/google-authenticator with the contents:

```
#%PAM-1.0
auth sufficient pam_access.so accessfile=/etc/security/access-local.conf
auth sufficient pam_succeed_if.so user notin some:users:here
auth required pam_google_authenticator.so
```
The pam_google_authenticator module does the real work, but I only want to require the OTP in certain cases: for all connections from the Internet, but not from my LAN. Thus pam_access, with the help of additional configuration, does just that. When turned on, pam_google_authenticator requires all users to use an OTP, with no provision for users who haven't set up their two-factor authentication yet (it would simply prevent them from logging in). There are several patches I could have applied to fix this problem, but I just went with the simple approach of manually configuring the list of users I want to use two-factor authentication, via the pam_succeed_if module. some:users:here is a colon-separated list of users that will be using two-factor authentication.
For pam_access, I created /etc/security/access-local.conf:

```
+ : ALL : 10.0.0.0/24
+ : ALL : LOCAL
- : ALL : ALL
```

The first line is where you define your network's subnet. It should likely be something like 192.168.1.0/24.
To allow PAM to query additional information via SSH, you need to make sure that ChallengeResponseAuthentication is not set to no in /etc/ssh/sshd_config. The default is yes, but Arch sets it to no, so I just commented out that line in the config and restarted SSH.
As my normal user, I ran google-authenticator, which generated a TOTP secret in my home directory. Presumably because I had qrencode installed, it also provided a very nice QR code (in the terminal, even!) that I scanned with my phone to configure the Google Authenticator Android application.
All the preparation work is complete; I now need to enable the setup for SSH. In /etc/pam.d/sshd I added a line under auth required pam_unix.so ...:

```
auth substack google-authenticator
```
After a bit of testing, I verified everything was running as I expected, and I now have two-factor authentication for accessing my server via SSH. To enable two-factor authentication for additional accounts, I will have the account's user run google-authenticator and set up their phone, after which I will add them to the list passed to pam_succeed_if.
XSLT with Python
Last time I thought it had been a long time since my previous post; this gap wins even more.
I still really like Lighttpd and until recently was only using Apache for mod_svn and mod_xslt. I don't have much choice with mod_svn short of using svnserve (which I may end up doing), but a few months ago (December 12 by the file date) I took up the challenge of replacing mod_xslt.
I did enjoy mod_xslt and can't complain about its performance or memory usage. The fact that the project is dead is disconcerting, but any time the module stops compiling I'm able to get it working again by looking around, posting on the mailing list, or fixing it myself. Really, the only qualm I have is that it requires Apache.
As an aside, my love for XML has long since passed and so I just want the system to work and I won't make any future enhancements. In general, I am now anti-XML and pro-JSON and -Bencode. My opinion is that there are still uses for XML, but that it is generally overused.
After some time, I developed this CGI script in Python:
```python
#!/usr/bin/env python
from lxml import etree
import cgi
import sys, os

KEY = "SOMESECRETKEY"

def transform(xml, xslt):
    doc = etree.parse(xml)
    doc.xinclude()
    style = etree.XSLT(etree.parse(xslt))
    return style.tostring(style.apply(doc))

if __name__ == "__main__":
    import cgitb
    cgitb.enable()

    form = cgi.FieldStorage()
    if "key" not in form or form["key"].value != KEY:
        print "Status: 403 Forbidden"
        print "Content-Type: text/html"
        print
        print "<html><title>403 Forbidden</title><body><h1>403 Forbidden</h1></body></html>"
        sys.exit()
    xml = form["xmlfile"].value
    xslt = form["xsltfile"].value
    contenttype = form["contenttype"].value
    print "Content-Type:", contenttype
    print
    print transform(xml, xslt)
```
Luckily I didn't use very many mod_xslt-specific features, so everything seemed to "just work." I did lose Content-Type support, so I have to hard-code it as a GET parameter. Notice I added the secret key in there since I didn't want to bother with proper security.
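For anyone adapting this, here's roughly how the lxml transform behaves on a toy input. This uses the current `style(doc)` call form rather than the older `apply`/`tostring` pair in the script above; the sample XML and stylesheet are made up for illustration:

```python
from lxml import etree

xml = etree.fromstring("<page><title>Hello</title></page>")
xslt = etree.fromstring("""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <html><body><h1><xsl:value-of select="/page/title"/></h1></body></html>
  </xsl:template>
</xsl:stylesheet>""")

style = etree.XSLT(xslt)
result = style(xml)  # newer lxml spelling of style.apply(doc)
print(str(result))
```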
Now for the Lighttpd configuration. Since I can no longer use .htaccess files in different directories to change which XSLT is used, I get this uglier config:
```
url.redirect = ( "^/$" => "/recent/" )
url.rewrite-once = (
  "^(/recent/rss/)(?:index.html|index.xml)?$" =>
    "/cgi-bin/ejona-xslt.py?key=SOMESECRETKEY&xsltfile=/path/to/htdocs/shared/xsl/application-rss%%2Bxml.xsl&contenttype=application/xml&xmlfile=../$1/index.xml",
  "^(/recent/atom/)(?:index.html|index.xml)?$" =>
    "/cgi-bin/ejona-xslt.py?key=SOMESECRETKEY&xsltfile=/path/to/htdocs/shared/xsl/application-atom%%2Bxml.xsl&contenttype=application/atom%%2Bxml&xmlfile=../$1/index.xml",
  "^(/recent/atom/summary/)(?:index.html|index.xml)?$" =>
    "/cgi-bin/ejona-xslt.py?key=SOMESECRETKEY&xsltfile=/path/to/htdocs/shared/xsl/application-atom%%2Bxml.summary.xsl&contenttype=application/atom%%2Bxml&xmlfile=../$1/index.xml",
  "^(/recent/atom/0\.3/)(?:index.html|index.xml)?$" =>
    "/cgi-bin/ejona-xslt.py?key=SOMESECRETKEY&xsltfile=/path/to/htdocs/shared/xsl/application-atom%%2Bxml.0.3.xsl&contenttype=application/xml&xmlfile=../$1/index.xml",
  "^((?:/recent/|/archive/).*).(?:html|xml)$" =>
    "/cgi-bin/ejona-xslt.py?key=SOMESECRETKEY&xsltfile=/path/to/htdocs/shared/xsl/text-html.xsl&contenttype=text/html&xmlfile=../$1.xml",
  "^((?:/recent|/archive)/(?:.*/)?)$" =>
    "/cgi-bin/ejona-xslt.py?key=SOMESECRETKEY&xsltfile=/path/to/htdocs/shared/xsl/text-html.xsl&contenttype=text/html&xmlfile=../$1/index.xml",
)
index-file.names = ( "index.xml" )
```
Notice the %%2B's in some of the URLs. Those make it additionally ugly, but I still prefer that stuff over dealing with Apache.
All-in-all, it feels like a reasonably hackish solution, but it works great. I don't care about loss in performance (honestly, who reads a not-updated-in-over-two-years blog?) and if I really care I could convert the script into a Fast-CGI on WSGI script. It is nice to know that this proof-of-concept of a blog is somewhat portable now.
Gtk, Languages, and Memory
It has been a long time since my last post. Sorry, but now to business.
Recently I have re-tried to open up to Mono. I certainly do like some of its abilities and features, but the memory usage always worries me. When I compared the memory usage of Banshee and Rhythmbox, I found memory utilization of 80 MiB and 20 MiB, respectively. That mostly shot the Mono idea out of the water. However, then I checked Tomboy memory usage and found it to be reasonable, but still a little high at 16.7 MiB on my computer with three notes. For reference, Pidgin uses 6.9 MiB and Nautilus uses 12.0 MiB.
Then I got to wondering about the baseline memory usage for using GTK+ for different languages. Here are the languages, files, and commands I used.
Languages, Files, and Commands
C
hello-world.c:
```c
#include <gtk/gtk.h>

static void
on_destroy (GtkWidget * widget, gpointer data)
{
  gtk_main_quit ();
}

int
main (int argc, char *argv[])
{
  GtkWidget *window;
  GtkWidget *label;

  gtk_init (&argc, &argv);

  window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
  gtk_window_set_title (GTK_WINDOW (window), "Hello World");
  g_signal_connect (G_OBJECT (window), "destroy",
                    G_CALLBACK (on_destroy), NULL);

  label = gtk_label_new ("Hello, World");
  gtk_container_add (GTK_CONTAINER (window), label);

  gtk_widget_show_all (window);
  gtk_main ();
  return 0;
}
```
compile: gcc hello-world.c -o hello-world `pkg-config --libs --cflags gtk+-2.0`
run: ./hello-world
C++
hello-world.cpp:
```cpp
#include <gtkmm/main.h>
#include <gtkmm/window.h>
#include <gtkmm/label.h>

class HelloWorld : public Gtk::Window
{
public:
  HelloWorld();
  virtual ~HelloWorld();

protected:
  Gtk::Label m_label;
};

HelloWorld::HelloWorld()
  : m_label("Hello, World")
{
  set_title("Hello World");
  add(m_label);
  show_all();
}

HelloWorld::~HelloWorld()
{
}

int main (int argc, char *argv[])
{
  Gtk::Main kit(argc, argv);
  HelloWorld helloworld;
  Gtk::Main::run(helloworld);
  return 0;
}
```
compile: g++ hello-world.cpp -o hello-world `pkg-config --libs --cflags gtkmm-2.4`
run: ./hello-world
Python
hello-world.py:
```python
import gtk

def on_destroy(o):
    gtk.main_quit()

w = gtk.Window()
w.set_title("Hello World")
w.connect("destroy", on_destroy)
l = gtk.Label("Hello, World")
w.add(l)
w.show_all()
gtk.main()
```
run: python hello-world.py
C#
hello-world.cs:
```csharp
using Gtk;
using System;

class Hello
{
    static void Main()
    {
        Application.Init ();
        Window window = new Window ("Hello World");
        window.DeleteEvent += delete_event;
        window.Add (new Label ("Hello, World"));
        window.ShowAll ();
        Application.Run ();
    }

    static void delete_event (object obj, DeleteEventArgs args)
    {
        Application.Quit ();
    }
}
```
compile: mcs hello-world.cs -pkg:gtk-sharp
run: mono hello-world.exe
IronPython
hello-world.py:
```python
import clr
clr.AddReference("gtk-sharp")
import Gtk

def delete_event (o, args):
    Gtk.Application.Quit ()

Gtk.Application.Init ()
w = Gtk.Window ("Hello World")
w.DeleteEvent += delete_event
l = Gtk.Label ("Hello, World")
w.Add(l)
w.ShowAll ()
Gtk.Application.Run ()
```
run: mono ipy.exe hello-world.py
Results
All the memory usages were recorded from GNOME's System Monitor. I used the new "Memory" column that is supposed to be less misleading than other measurements.
Language | Memory Usage
---|---
C | 1.9 MiB
C++ | 2.7 MiB
Python | 6.6 MiB
C# | 3.3 MiB
IronPython | 29.8 MiB
I tried to do Java as well, but I could not get the Java bindings to compile. I think that this test gave Mono some credibility and removed any consideration for using IronPython from my mind. I was not very surprised with Python's "high" memory usage, since I had already looked into this when looking into GEdit's plugins. This test tells nothing about actual memory usage in typical programs in each of the different languages, just the baseline for how low the memory usage can go.
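For reproducing numbers like these without System Monitor, the resident set size can be read straight from /proc on Linux. The System Monitor "Memory" column is a fancier, shared-memory-aware estimate, so values won't match exactly; `rss_kib` is my own helper name:

```python
def rss_kib(pid="self"):
    # VmRSS in /proc/<pid>/status is the process's resident set size, in KiB.
    with open("/proc/%s/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return None

print("%.1f MiB" % (rss_kib() / 1024.0))
```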
Avahi and DNS
I was playing around with Avahi and DNS and decided that since it was so much fun, I should share my experience.
I already had Avahi set up. Avahi is a Zeroconf (aka Bonjour, Rendezvous) implementation. Zeroconf allows you to find services on a network. Previously I just used Avahi for finding iTunes shares (with Rhythmbox) and publishing my services (ssh, http, ipp, rsync, etc.).
The Beginning
Previously I noticed, with avahi-discover, that my workstation was published (because publish-workstation is enabled by default). I had no idea how to use it, though. Looking through Planet GStreamer for the first time, I found a post describing how to use this other tidbit of information.
Getting Automatic Local Hostnames
Up to this point, I had been setting hostnames in /etc/hosts for computers on my network. My network maintains fairly stable IPs, so this was not a big issue. But with Avahi, this can be automatic! I emerged nss-mdns and added mdns (most people probably want mdns4 instead) to the hosts line of /etc/nsswitch.conf (it now looks like "hosts: files dns mdns").
DNS Caching and Local Domain Setting
At this point, I could go to my machine via mastermind.local, and the ersoft server via wife.local. As I removed wife and mastermind from my hosts file, I realized that DNS lookups were much slower when they came from Avahi. Comcast's DNS servers are unbearably slow, so I figured this would be a good time to set up a DNS cache.
I found a post about DNS caching for Ubuntu. Since I already had dnsmasq, I uncommented the listen-address line in /etc/dnsmasq.conf and set it equal to 127.0.0.1 ("listen-address=127.0.0.1"). Then I ran rc-update add dnsmasq default and /etc/init.d/dnsmasq start.
To configure your system to use this cache, you need to modify /etc/dhcp/dhclient.conf. It is very possible that you are missing this file; if you are, just emerge dhcp.
That file is also the place where you can set your default domain name. Setting the domain name allows you to connect to a host via hostname as opposed to hostname.domainname. In my case, without the default domain name set, I would have to connect to mastermind.local to get to my laptop. For most people, their domain name would be local as well. My dhclient.conf now looks like:

```
prepend domain-name-servers 127.0.0.1;
supersede domain-name local;
```
If you were previously using dhcpcd in Gentoo, you will want to change /etc/conf.d/net to use dhclient. You can achieve this with:

```
modules=( "dhclient" )
```
You will need to restart your network device before it uses the new configuration. If you want to test that everything is working, dig is a useful command; it is part of bind-tools. Give dig the argument of the host you want to look up. It will give you a lot of good DNS information, including "Query time." The query time from the cache should be 0-1 milliseconds.