Ahhh, love that new server smell. 😀
There's a new version of WP Cron Pixie! 🎉
WP Cron Pixie 1.4.1 changelog:
Enjoyed attending #dttech last night, my first @DigitalTaunton event.
Liked the format of 2 main talks + 3 lightning talks after a break.
Great talks, and very well organised event, a lot of work must have gone into it, congrats to all! 👏
This week I released Snippet Pixie 1.3, with zero new features, but it took many many hours of development.
Umm, what? How could you bump Snippet Pixie up a version, and claim to have spent "many many hours" on development, but not have any new features?
And isn't this post titled "Snippet Pixie 1.3.1 Released"?
Ok, to some people that might not mean anything, but believe me, it's a major relief to have Snippet Pixie able to work with Electron based apps, and of course, Chrome and Chromium too.
Before v1.3, Snippet Pixie relied on apps using the ATK library, or at the very least doing what's needed to work with the accessibility D-Bus interfaces that AT-SPI interacts with. On top of that, the user had to be typing in a control that implemented the EditableText and Text interfaces. There are many Linux applications that are ATK/AT-SPI compliant, but unfortunately a few very popular ones are not.
The biggest trouble maker for Snippet Pixie was Chrome, turns out it's quite a popular browser. 😉
And many companies build cross platform apps with Electron, a technology that basically uses Chrome under the hood, so they weren't working well with Snippet Pixie.
I tried all kinds of things to get Chrome and Electron apps to play nice, and there was the odd occasion when a control here and there would actually work with the accessibility framework, but in the end I had to throw out the accessibility stuff and go "old skool".
Well, I say "throw out the accessibility stuff", but that's not quite true, and is the reason for v1.3.1, but we'll come to that in a minute.
Snippet Pixie still uses AT-SPI for monitoring keystrokes, but it's using a method that is a little more greedy in its scope, monitoring "all windows" for the current app rather than just the currently focused text control. With this method, Snippet Pixie can see when a key is pressed that matches the last character of some snippet's abbreviation, and then go look to see whether previously entered text matches an abbreviation.
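That two-step check can be sketched in std-only Rust (Snippet Pixie itself is written in Vala and works against AT-SPI events; the function names and abbreviations here are invented for illustration):

```rust
// Sketch: decide whether a just-pressed key warrants a full abbreviation
// check. Only when the typed character matches the *last* character of some
// known abbreviation do we bother inspecting the previously entered text.

fn key_may_end_abbreviation(key: char, abbreviations: &[&str]) -> bool {
    abbreviations.iter().any(|abbr| abbr.chars().last() == Some(key))
}

fn find_matching_abbreviation<'a>(
    preceding_text: &str,
    abbreviations: &'a [&'a str],
) -> Option<&'a str> {
    // Check whether the text just typed ends with any known abbreviation.
    abbreviations
        .iter()
        .copied()
        .find(|abbr| preceding_text.ends_with(abbr))
}

fn main() {
    let abbrs = [";sig", ";addr"];
    // 'g' ends ";sig", so a full check is worthwhile.
    assert!(key_may_end_abbreviation('g', &abbrs));
    // 'x' ends no abbreviation, so the keystroke can be ignored cheaply.
    assert!(!key_may_end_abbreviation('x', &abbrs));
    assert_eq!(find_matching_abbreviation("Cheers,;sig", &abbrs), Some(";sig"));
}
```

The cheap last-character test is what keeps the greedier "all windows" monitoring affordable: most keystrokes never trigger the expensive lookup.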
This may sound less performant than monitoring the currently focused text control, and technically it likely is, but it turns out that with this method I was able to eliminate a huge performance problem that earlier versions of Snippet Pixie suffered from.
Snippet Pixie used to have to notice when the current app changed, then look through all the widgets in that app to try and find the currently focused one, which hopefully was an editable text control. When that app happens to be a spreadsheet with thousands of widgets, the search for the currently focused control (cell) could take a noticeable amount of time. We're not talking minutes here, not even seconds, but if it took anywhere near half a second, you'd notice it.
Because Snippet Pixie is now monitoring at a higher level by default, it does not need to go looking for a widget to attach monitoring code to. That's a big boost in performance; it no longer feels like Snippet Pixie is slowing down app switching for some apps.
Hang on a minute, if you no longer know where the text is being entered, how are you checking whether the text before that keystroke actually matches one of those abbreviations?
Blimey, you do ask some very intelligent questions don't you? 😉
So this is where a huge amount of time was spent in developing this release. When I first thought of building Snippet Pixie, one of the initial ideas I had for how to get the just entered text was to see if there was a way to walk back through the text with some sort of selection method, and then check the contents of the selection. I didn't get too far in that investigation before finding the accessibility interfaces, which seemed like a much saner approach.
Recently I found out about another open source text expander style app called Espanso. There was an issue raised on the project about the clipboard being overwritten when an expansion happened, or something like that, I can't quite remember. Anyway, seeing that made me think that maybe I should take another look at my mad idea of selecting text, stuffing it into the clipboard, and then checking the clipboard's contents.
After much research and experimentation, I eventually managed to get Snippet Pixie to start a text selection by "pressing" the shift key and then "pressing and releasing" the left cursor key, then fake a copy shortcut, check the contents of the clipboard, and then keep hitting the left cursor key until an abbreviation was found, or the max length of known abbreviations was exceeded.
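The walk-back loop can be sketched in std-only Rust. The real implementation fakes Shift+Left key presses and copies the growing selection into the clipboard; this simulation just slices a plain string buffer instead, with invented names:

```rust
// Sketch of the walk-back: extend a fake "selection" one character to the
// left at a time, "copy" it (here: slice the buffer), and stop when the
// selection matches a known abbreviation or exceeds the longest one.

fn walk_back_for_abbreviation<'a>(
    buffer: &str,                 // text to the left of the caret
    abbreviations: &'a [&'a str],
) -> Option<&'a str> {
    let max_len = abbreviations.iter().map(|a| a.chars().count()).max()?;
    let chars: Vec<char> = buffer.chars().collect();

    for len in 1..=max_len.min(chars.len()) {
        // "Select" `len` characters to the left of the caret and inspect them.
        let selection: String = chars[chars.len() - len..].iter().collect();
        if let Some(abbr) = abbreviations.iter().copied().find(|a| *a == selection) {
            return Some(abbr);
        }
    }
    None
}

fn main() {
    let abbrs = [";sig", ";addr"];
    assert_eq!(walk_back_for_abbreviation("Cheers,;sig", &abbrs), Some(";sig"));
    assert_eq!(walk_back_for_abbreviation("no match here", &abbrs), None);
}
```

The length cap matters: there's no point selecting further left than the longest known abbreviation, which bounds how many fake key presses are ever needed.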
This kind of worked, and once I had an abbreviation match it was a cinch to then stuff the expanded text into the clipboard and fake a paste hotkey. Thanks for the initial code Clipped!
Grabbing the selected text into the clipboard wasn't ideal though. It was very flaky: using the copy shortcut for every extra character was hit and miss, and there was some weirdness with the clipboard not always updating.
I learnt a lot about the async mechanisms that the Linux desktop clipboard (GTK+ based in this case) uses, and that besides the clipboard most people think about, there's also a second "selection" clipboard that auto-populates when text is selected.
With that knowledge to hand, and after much experimentation with threads and strategic yielding and microscopic usleeps (I hate handling threads, but who doesn't?), I finally got a nice mechanism that recognises the abbreviation via selection, and then expands via the clipboard.
Over time I fine tuned the mechanism, and added a bunch of speed ups and error correcting retries to the way abbreviations are searched for, and ensured that the current clipboard is saved away and restored before and after the expansion respectively. The attached gif is intentionally slow so you can see the walk back working, even error correcting, but in general the mechanism is way quicker.
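The save-and-restore step can be sketched with an RAII-style guard. This is a std-only Rust illustration with a mock clipboard standing in for the real GTK one; all names are invented:

```rust
// Sketch: save the clipboard before an expansion and restore it afterwards.
// A real implementation talks to the GTK clipboard; a String stands in here.

struct MockClipboard {
    contents: String,
}

struct ClipboardRestore<'a> {
    clipboard: &'a mut MockClipboard,
    saved: String,
}

impl<'a> ClipboardRestore<'a> {
    fn new(clipboard: &'a mut MockClipboard) -> Self {
        // Save the user's current clipboard contents before touching anything.
        let saved = clipboard.contents.clone();
        ClipboardRestore { clipboard, saved }
    }

    fn expand(&mut self, expansion: &str) -> String {
        // Stuff the expanded text into the clipboard ready for a faked paste.
        self.clipboard.contents = expansion.to_string();
        self.clipboard.contents.clone()
    }
}

impl<'a> Drop for ClipboardRestore<'a> {
    fn drop(&mut self) {
        // Put the user's original clipboard contents back.
        self.clipboard.contents = self.saved.clone();
    }
}

fn main() {
    let mut cb = MockClipboard { contents: "user data".into() };
    {
        let mut guard = ClipboardRestore::new(&mut cb);
        assert_eq!(guard.expand("Kind regards,\nIan"), "Kind regards,\nIan");
    } // guard dropped: clipboard restored
    assert_eq!(cb.contents, "user data");
}
```

Tying the restore to the guard's drop means the user's clipboard comes back even if the expansion path bails out early.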
It worked really well with Chrome, Chromium and even Electron apps, but when I tested LibreOffice Writer, which had worked fine with the previous version of Snippet Pixie, it did the most weird things and lost text selection all the time, making abbreviation recognition impossible.
So I brought back the core abbreviation recognition engine of the old accessibility framework dependent mechanism, and sure enough, that still worked with LibreOffice Writer. Well, it wasn't quite that simple, I also had to add back a mechanism for being notified of when an accessible control was focused, and do a bunch of work to dynamically swap between the "all windows" monitoring method and the control based one as appropriate when apps are switched.
I did not need to re-add the old widget lookup code though, and I was able to improve the accessibility based mechanism with the abbreviation lookup speedups and threading from the new text selection mechanism.
In the end I got to a place where the previously supported applications still worked with the faster accessibility based text editing, and added support for a much wider set of applications via the new text selection mechanism.
When I applied the speed ups I'd created while developing the new mechanism to the old one, I missed one problem, and that's what necessitated v1.3.1. The re-implemented accessibility based code uses a slightly different monitoring mode, async rather than blocking, which was kinda needed as part of the performance improvements and thread usage. Sometimes the control where the text was being entered would report the caret position incorrectly, especially near the beginning of the control and just after switching into the app. It turns out I needed to yield the abbreviation checking thread for a split second to let the app get itself together.
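That "yield and retry" fix can be sketched in std-only Rust. The `read_caret_position` closure stands in for the real AT-SPI call, and the names and timings are illustrative only:

```rust
use std::thread;
use std::time::Duration;

// Sketch: retry reading the caret position with a tiny sleep in between,
// giving the focused app a moment to settle after an app switch.

fn caret_with_retries<F>(mut read_caret_position: F, attempts: u32) -> Option<usize>
where
    F: FnMut() -> Option<usize>,
{
    for _ in 0..attempts {
        if let Some(pos) = read_caret_position() {
            return Some(pos);
        }
        // Yield briefly so the app can get itself together.
        thread::sleep(Duration::from_micros(500));
    }
    None
}

fn main() {
    // Simulate a control that reports correctly only on the third read.
    let mut calls = 0;
    let result = caret_with_retries(
        || {
            calls += 1;
            if calls >= 3 { Some(42) } else { None }
        },
        5,
    );
    assert_eq!(result, Some(42));
}
```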
There are unfortunately a few apps that remain incompatible with Snippet Pixie 1.3.1.
The one that hurts the most for me is Firefox, my preferred web browser, which at some point in its last few releases started to misbehave when Snippet Pixie was in use. The accessibility based method that Snippet Pixie used stopped working for some reason; something must have changed in Firefox to block keystroke monitoring. I wouldn't be surprised if this was an intentional security measure, but I bet it also stops screen readers from working now.
When forcing use of the new text selection mechanism with Firefox, it keeps losing focus while you're typing, which breaks everything; text simply stops being entered. To get around this temporarily, until I can find a proper solution or Firefox gets fixed, Snippet Pixie now blacklists Firefox, so at least Firefox is usable, even if snippets don't expand.
Similarly, Snippet Pixie has never worked with terminal emulators; their text handling is fundamentally different. To avoid any potential issues with the new text selection mechanism, a bunch of popular terminal emulators are now blacklisted too, and Snippet Pixie turns off abbreviation checking while they're the active application.
This new blacklisting mechanism actually brought to the fore another performance improvement for Snippet Pixie, as turning off abbreviation checking speeds things up in incompatible apps. Along with this change, Snippet Pixie also monitors fewer keystrokes in general now, because if you're holding either the Ctrl or left Alt key, as far as I can tell you can't be entering a character. The right Alt Gr key is still monitored in combination with normal keys though, as that's how you generally enter extended characters such as € and ¢.
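That modifier filtering boils down to a tiny predicate, sketched here in std-only Rust with invented names:

```rust
// Sketch: skip abbreviation checking for key presses that can't produce a
// character. Ctrl or left Alt held => ignore the keystroke; right Alt
// (Alt Gr) still counts, because it's how extended characters like € and ¢
// are generally entered.

#[derive(Default)]
struct Modifiers {
    ctrl: bool,
    left_alt: bool,
    alt_gr: bool, // right Alt
}

fn should_check_keystroke(mods: &Modifiers) -> bool {
    !(mods.ctrl || mods.left_alt)
}

fn main() {
    // Plain typing is monitored.
    assert!(should_check_keystroke(&Modifiers::default()));
    // Ctrl+key and left-Alt+key are shortcuts, not characters.
    assert!(!should_check_keystroke(&Modifiers { ctrl: true, ..Default::default() }));
    assert!(!should_check_keystroke(&Modifiers { left_alt: true, ..Default::default() }));
    // Alt Gr combinations still produce characters, so keep monitoring.
    assert!(should_check_keystroke(&Modifiers { alt_gr: true, ..Default::default() }));
}
```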
The final incompatibility with Snippet Pixie 1.3.1 is Wayland compositors. This too hurts, as I'm a fan of Sway, but alas the way I've had to implement active window/app checking is via libwnck, which unfortunately only supports X11.
I'm actively trying to find either an alternative robust method of tracking the current application that works in X11 and Wayland, or an additional Wayland only method that I can use when Wayland is detected.
So I guess you could maybe class the expanded compatibility and vastly improved performance as new features, but even if you don't, I'm sure you can see why I bumped the version from 1.2.x to 1.3.x.
Anyway, that was a bit of a rambling retrospective into what went into Snippet Pixie 1.3.1. Hope you enjoy this release, please submit bugs and feature requests in the GitHub repo.
FYI, if you try to circumvent paying for support for a client's product I work on by bombarding my personal email address, my contact page, or any other means that are not a proper support channel, I'm still not going to reply to your vague mess of a support request! 👿
boot.cleanTmpDir on NixOS can solve the most head-scratchy of head-scratchy things!
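For anyone curious, that's a one-line option in configuration.nix (fragment; assumes a NixOS release where the option still goes by this name):

```nix
# /etc/nixos/configuration.nix (fragment)
# Clean /tmp on every boot, clearing out stale state between reboots.
boot.cleanTmpDir = true;
```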
Yay, after a couple of days painting the landing I'm finally at podcast zero! 🎉 (there may have been a good bit of podcast culling and skipping too)
I just wrote about How I Use tmux For Local Development across on the Delicious Brains blog.
If you use the command line at all, you should check it out!
Hello, my name is Ian Jones, and this is the RustyElm microcast, a short, very ad-hoc journal of my attempt to learn Rust and Elm in order to build a side project.
So, I've finally had some time over the last week or so to get stuck into my Rust and Elm project, starting by trying to get the Rust based server up and running.
Getting a server up and running with a simple "Hello World" is a doddle with Rocket, and I had fun playing with the basics of request guards etc. to easily handle calls to /hello/
However, I then made the mistake of jumping into taking a look at how to use the Juniper crate along with Rocket for creating a GraphQL server. Things very quickly came unstuck as I tried to work through a seemingly endless number of errors that frankly made no sense to me, as I have very little experience with the Rust compiler. I tried following the quick start guide, but even that had problems when implemented in Rocket with slight modifications as per the integration docs and its example on GitHub. The problems seemed to be related to dependencies; I could never get the derive macros to work properly.
Eventually I came to my senses and decided that maybe I should first attempt setting up a couple of simple REST routes that return JSON; that's built into Rocket, and in the simple case hardly needs any dependencies.
It worked fine, changing the existing play routes to return JSON was easy. Tiny steps for the win!
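To show the shape of the payload those routes return, here's a std-only sketch; the real code would use `rocket_contrib::json::Json` with serde derives, so the hand-rolled serialisation and the `Greeting` struct here are purely illustrative:

```rust
// Sketch: the kind of JSON body a simple Rocket route might return.
// Hand-rolled serialisation only so this stays dependency-free; real code
// would derive Serialize and wrap the struct in Json<Greeting>.

struct Greeting {
    name: String,
    message: String,
}

impl Greeting {
    fn to_json(&self) -> String {
        format!(
            r#"{{"name":"{}","message":"{}"}}"#,
            self.name, self.message
        )
    }
}

fn main() {
    let g = Greeting {
        name: "world".into(),
        message: "Hello, world!".into(),
    };
    assert_eq!(g.to_json(), r#"{"name":"world","message":"Hello, world!"}"#);
}
```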
Then it was time to connect up a database and grab some data from it. I thought it would be relatively simple, as again, I was going to use functionality that is effectively built into Rocket for accessing a database via the Diesel ORM.
I have some data I can use from an existing prototype of the application I'm building. The prototype currently uses an SQLite database, and I have a couple of shell scripts with accompanying SQL scripts that can extract the data and import it into a CockroachDB database. I want to use CockroachDB because it uses the PostgreSQL wire protocol and is mostly SQL compatible with PostgreSQL too, so it's easy to use with most languages and frameworks, but can easily scale out and have data geo-partitioned. I don't need that scale out capability or geo-partitioning just now, but seeing as Cockroach is a breeze to set up and frankly just cool to play with, I'm going to use it. Have I mentioned that this project is all about learning how to use technologies that I find very interesting and see a great future for?
Anyway, installing the Diesel CLI was dead easy, a simple cargo install diesel_cli and we were done as I had already made sure I had the required PostgreSQL client bits and bobs installed as per the Quick Start Guide. And the basic setup of its dotenv file along with similar configuration in Rocket.toml was a no-brainer too. A quick run of diesel setup and I had the expected new migrations directory and schema.rs file in my project.
Before getting stuck into using Diesel with Rocket I took a detour to make a few tweaks to my data migration scripts to use UUID columns instead of plain integer style serial columns. When data can be written to multiple nodes in a cluster, like with CockroachDB, it's highly recommended to use random primary keys, as that prevents writes from being clumped together in its range based data layout. However, using UUIDs turned out to cause more dependency headaches further down the line that required some more quick learning, but we'll come to that in a second.
Diesel's migrations feature allowed me to convert the schema definition bits from my data migrations scripts into proper migration steps. The first step to set up a basic schema with extra "old ID" fields in tables, the next step being used for data import so I can re-create it easily from the SQLite database, and a follow on migration that then fixed up the data to convert the pre-existing integer based primary/foreign key pairs to use the new UUID setup. That last step also dropped the "old ID" fields on completion.
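As a sketch of what that last fix-up migration's up.sql might look like (table and column names are invented; Diesel migrations are plain SQL files, and `UPDATE ... FROM` works in both PostgreSQL and CockroachDB):

```sql
-- up.sql (sketch): finish moving from integer keys to UUIDs.
-- Assumes the earlier migrations created new UUID `id` columns while
-- keeping the imported integer `old_id` columns in place.

-- Point each child row at its parent's new UUID via the old integer pair.
UPDATE items
SET parent_id = parents.id
FROM parents
WHERE items.old_parent_id = parents.old_id;

-- Once every reference is rewritten, the integer columns can go.
ALTER TABLE items DROP COLUMN old_parent_id;
ALTER TABLE parents DROP COLUMN old_id;
```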
Then came the task of creating structs in the code that could be hooked up to the schema with derive(Queryable) macros etc. These macros allow the Diesel ORM to build queries and with the right use statements the fields from the structs can easily be used in code to form predicates. This turned out to be another source of Dependency Hell, largely due to the UUID and TIMESTAMP columns in the schema requiring new crates that exposed the underlying problem I had before.
There were a lot of errors related to traits not being satisfied one way or another, and infuriatingly it would often show trait signatures that exactly matched what I saw in the docs for the UUID or chrono crates that I was using. Every time I searched Stack Overflow or other sources for anything related, the answers always said that dependencies hadn't been properly added to Cargo.toml or as use statements, and then showed exactly what I already had. Eventually I pieced together enough information to realise that if the same crate is pulled in as a dependency by a couple of crates, and/or explicitly by your code, and they happen to resolve to slightly different versions with seemingly identical function signatures, then being even a patch level apart means Rust considers the traits completely different and incompatible.
I used the excellent cargo-tree plugin to inspect the dependency tree and saw these tiny differences in crate dependencies. At first I was at a bit of a loss as to what to do, then on a whim I decided to relax the version numbers I'd used in the dependencies sections of the Cargo.toml file. So instead of setting Rocket to be version 0.4.2, I instead used version 0.4, and so on for all the other dependencies. And boom, just like that, cargo test compiled and ran! I had previously set tight versions to try and match what I'd seen in the dependency tree, slightly coaxing versions I had control over to match the versions I saw as automatic imports. That was the wrong way around; a better strategy was to relax the versions a little so that Cargo had wiggle room and could satisfy dependencies with slightly looser version requirements. Not quite sure where my head was on that one, it's obvious in retrospect, I blame late night programming!
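In Cargo.toml terms the change looked something like this (versions illustrative; note that Cargo treats a bare "0.4.2" as "^0.4.2", so even the "tight" form allows newer 0.4.x patch releases, while "0.4" allows any 0.4.x at all):

```toml
# Before: patch-level requirements, nudging Cargo toward exact versions.
# [dependencies]
# rocket = "0.4.2"

# After: looser requirements give Cargo wiggle room to unify duplicate
# transitive dependencies onto a single version.
[dependencies]
rocket = "0.4"
```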
With that I was able to get a couple of routes set up to return information in JSON format from the database based on UUID or other field types I played with. I even set things up so that custom JSON errors were returned when supplied query params weren't as expected and so on.
I've seen it mentioned somewhere before that one of the hardest hurdles to get over when starting with languages like Rust is simply getting past the endless stream of compile time errors you encounter when first trying to find your feet. I very much felt that pain over the last few days, and I expect it's one of the reasons Elm tries so hard to have nice, helpful compiler errors that suggest fixes. To be fair, in a lot of cases there were helpful hints from the Rust compiler too; there were times where I knocked out a whole bunch of errors by following the tips it showed.
So I now had the basics of a REST API server set up, albeit querying a single database table, but alas my GitLab CI pipeline was failing due to the new database stuff. How I got that fixed up and cut down from a run time of 26 minutes to under 3 minutes is a story for another day, as I've run out of time.
As always, you can find me on...