---
title: Solving app_data or ReqData missing in requests for actix-web
date: 2022-03-26
---
> This post is day 5 of me taking part in the #100DaysToOffload[a] challenge.
=> https://100daystooffload.com/ [a]
I'm using actix-web to set up a web server, and I've been hitting a small problem that I think other people may come across too.
To explain the problem, let me talk a bit about my setup. I have a custom middleware that checks if a user is authorized to access a route. It looks like this:
```rust
impl<S: 'static, B> Service<ServiceRequest> for CheckLoginMiddleware<S>
where
    S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = Error>,
    S::Future: 'static,
{
    type Response = ServiceResponse<EitherBody<B>>;
    type Error = Error;
    type Future = LocalBoxFuture<'static, Result<Self::Response, Self::Error>>;

    dev::forward_ready!(service);

    fn call(&self, req: ServiceRequest) -> Self::Future {
        let state = self.state.clone();
        let (request, payload) = req.into_parts();
        let service = self.service.clone();
        let user_token = get_token_from_header(&request);
        let path_token = if self.allow_path_tokens {
            get_token_from_query(&request)
        } else {
            None
        };
        Box::pin(async move {
            match verify_auth(state, user_token, path_token, request.path()).await {
                Ok(authorized) => {
                    tracing::debug!("Request authorized, inserting authorization token");
                    // This is the "important bit" where we insert the authorization token into the request data
                    request.extensions_mut().insert(authorized);
                    let service_request =
                        service.call(ServiceRequest::from_parts(request, payload));
                    service_request
                        .await
                        .map(ServiceResponse::map_into_left_body)
                }
                Err(err) => {
                    let response = HttpResponse::Unauthorized().json(err).map_into_right_body();
                    Ok(ServiceResponse::new(request, response))
                }
            }
        })
    }
}
```
The verify_auth function is omitted, but the gist of it is that it returns a Result<Authorized, Error>. If the user is authorized, the authorization token it returned is then attached to the request.
Then here's how I use it in a path:
```rust
#[delete("/{store}/{path:.*}")]
async fn delete_storage(
    params: web::Path<(String, String)>,
    // This parameter is automatically filled with the token
    authorized: Option<ReqData<Authorized>>,
) -> Result<HttpResponse, StorageError> {
    let (store, path) = params.as_ref();
    let mut store_path = get_authorized_path(&authorized, store)?;
    store_path.push(path);
    if fs::metadata(&store_path).await?.is_file() {
        tracing::debug!("Deleting file {:?}", store_path);
        fs::remove_file(&store_path).await?;
    } else {
        tracing::debug!("Deleting folder {:?}", store_path);
        fs::remove_dir(&store_path).await?;
    }
    Ok(HttpResponse::Ok().finish())
}
```
This setup worked for this path, but would absolutely not work for another path. I inserted logs to track everything, and just found that the middleware would insert the token, but the path would just get None. How‽ I tried to slowly strip everything away from the non-functional path until it was identical to this one, but it still would not work.
Well, it turns out the solution was very simple. See this:
```rust
use my_package::storage::put_storage;
use crate::storage::delete_storage;
```
Ah! They are imported differently. I had set up my program as both a library and a program for various reasons. However, it turns out importing the same thing through crate is different from importing it through the library. Because of the difference in import, Actix doesn't recognize that the types match, so the route can't access the attached token.
The solution is normalizing the imports. I went with going through the library for everything, because that's what rust-analyzer's automatic import seems to prefer.
```rust
use my_package::storage::{put_storage, delete_storage};
```
Solved!
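If you're curious why the types don't match: request extensions are stored in a map keyed by each value's TypeId, and the "same" struct reached through two different paths compiles into two distinct types with distinct TypeIds. Here's a minimal, self-contained sketch of the idea — lib_a and lib_b are hypothetical stand-ins for the library path (my_package::…) and the binary path (crate::…), not actix-web's actual machinery:

```rust
use std::any::TypeId;

// Stand-ins for the two import paths. To the compiler these are
// two completely unrelated types, even though they look identical.
mod lib_a {
    pub struct Authorized;
}

mod lib_b {
    pub struct Authorized;
}

fn main() {
    // The two types have different TypeIds, so an extensions lookup
    // keyed by one never finds a value stored under the other.
    assert_ne!(
        TypeId::of::<lib_a::Authorized>(),
        TypeId::of::<lib_b::Authorized>()
    );
    println!("different TypeIds, so the extension lookup misses");
}
```

The middleware inserted the token under one TypeId, and ReqData<Authorized> in the route looked it up under the other, so the route always got None.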

---
title: Writing a Program in Bash
date: 2015-04-12
---
I don't really know why, but writing code in Bash makes me kinda anxious. It feels really old, outdated, and confusing. Why can't a function return a string? And no classes, or even data types? After getting confused, usually, I just end up switching to Python. But this time, I decided to stick with Bash. And I am surprised. It is unbelievably good. I must say, now I understand the Unix philosophy much better. Having small programs that each do one thing very well allows you to combine the power of those programs in your scripts. You think your favourite programming language has a lot of libraries? Well, Bash has access to more. The entire Unix ecosystem powers Bash. Converting videos, taking screenshots, sending mails, downloading and processing pages; there are already command line tools for all of that, and you have great access to all of them.
The program I've started writing is called WoWutils[a]. And I'm still shocked at just how much functionality I have added with so little code. If you are considering writing a program in Bash too, just go through with it. It really is very powerful.
=> https://github.com/SeriousBug/WoWutils [a]

---
title: “Black Crown Initiate”
date: 2022-04-02
---
> This post is day 9 of me taking part in the #100DaysToOffload[a] challenge.
=> https://100daystooffload.com/ [a]
I love metal, and I've been listening to it since I was 13. It was the first music genre that I actually liked: until I discovered metal I actually thought I didn't like music at all, because nothing I heard on the radio or heard my friends listening to was interesting to me. My taste in music has expanded and changed over the years to include different types of music and genres, but metal remains the one I love the most.
Demonstrating my metal-worthiness aside, I've always listened to European metal bands. I had this weird elitist thought that “good” metal could only come from Europe, with exceptions for some non-European bands, and that American metal was just always bad. This is obviously false, but I had just never come across anything American that I liked. That's until recently.
I recently came across Black Crown Initiate[a], a progressive death metal band from Pennsylvania. And I have to tell you that they are amazing.
=> https://www.metal-archives.com/bands/Black_Crown_Initiate/3540386765 [a]
Their first release “Song of the Crippled Bull” is absolutely amazing. The music is just the right amount of metal and progressive, and the lyrics are excellent. The clean vocals get the themes of the song across, while the growls give a lot of power to the songs. My favorite songs from this release are “Stench of the Iron Age” and the title track “Song of the Crippled Bull”. Other highlights from the band I've listened to so far include “A Great Mistake”, “Death Comes in Reverse”, and “Vicious Lives”.
I'm still making my way through their songs, but I'm glad to have discovered something from America that I absolutely love. I'm now trying to find more non-European bands that I enjoy.

---
title: An introduction to Bulgur Cloud - simple self hosted cloud storage
date: 2022-03-29
---
> This post is day 8 of me taking part in the #100DaysToOffload[a] challenge.
=> https://100daystooffload.com/ [a]
I've recently been working on Bulgur Cloud, a self-hosted cloud storage software. It's essentially Nextcloud, minus all the productivity software. It's also designed to be much simpler, using no databases and keeping everything on disk.
The software is still too early to actually demo, but the frontend is at a point where I can show off some features. So I wanted to do just that.
=> /img/2022-03-29-00-17-38.png The login screen
I've been going for a clean “print-like” look. I think it's going pretty well so far.
=> /img/2022-03-29-00-16-13.png The directory listing
I'm not sure about the details of how the directory listing will look. I don't think I like the upload button in the corner, and the rename and delete icons feel like they would be easy to mis-press. There is a confirmation before anything is actually deleted, but it still would be annoying.
=> /img/2022-03-29-00-20-48.png Delete confirmation prompt
Something I'm pretty happy with is the file previews. I've added support for images, videos, and PDFs. Video support is restricted to whatever formats your browser supports, as the server doesn't do any transcoding, but I think it's still very useful for a quick preview. I'm also planning on support for audio files. The server supports range requests, so you can seek around in the video without waiting to download everything (although I've found that Firefox doesn't handle that very well).
=> /img/2022-03-29-00-22-48.png Video file preview
This is a web interface only so far, but I'm planning to add support for mobile and desktop apps eventually. I've been building the interface with React Native, so adding mobile/desktop support shouldn't be too difficult, but I've been finding that “write once, run everywhere” isn't always that simple. I ended up having to add web-only code to support stuff like the video and PDF previews, so I'll have to find replacements for some parts. Mobile and desktop apps natively support more video and audio formats too, and with native code you usually have the kind of performance to transcode video if needed.
The backend is written in Rust with actix-web, using async operations. It's incredibly fast, and uses a tiny amount of resources (a basic measurement suggests < 2 MB of memory used). I'm pretty excited about it!
After a few more features (namely being able to move files), I'm planning to put together a demo to show this off live! The whole thing will be open source, but I'm waiting until it's a bit more put together before I make the source public. The source will go live at the same time as the demo.

---
title: Emacs and extensibility
date: 2015-10-06
---
Update: I've put the small Emacs tools I have written into a gist[a].
=> https://gist.github.com/91c38ddde617b98ffbcb [a]
I have been using Emacs for some time, and I really love it. The amount of power and customizability it has is incredible. What other editor allows you to connect to a server over SSH and edit files there, which is what I am doing to write this post? How many editors or IDEs have support for so many languages?
One thing I didn't know in the past, however, is how extensible Emacs is. I mean, I do use a lot of packages, but I had never written Elisp and I didn't know how hard or easy it would be. But after starting to learn Clojure a bit, and feeling more comfortable with lots of parentheses, I decided to extend Emacs a bit to make it fit myself better.
The first thing I added is an “insert date” function. I use Emacs to take notes during lessons (using Org-mode) and I start every note with the date of the lesson. Sure, glancing at the date in the corner of my screen and writing it down takes just a few seconds, but why not write a command to do it for me? Here is what I came up with:
```elisp
(defun insert-current-date ()
  "Insert the current date in YYYY-MM-DD format."
  (interactive)
  (shell-command "date +'%Y-%m-%d'" t))
```
Now that was easy and convenient. And being able to write my first piece of Elisp so easily was really fun, so I decided to tackle something bigger.
It is not rare that I need to compile and run a single C file. Nothing fancy, no libraries, no makefile, just a single C file to compile and run. I searched around the internet for something like “Emacs compile and run C”, but couldn't find anything. I had been doing this by opening a shell in Emacs and compiling/running the program, but again, why not automate it?
The code that follows is not really good; “it works” is about as good as it gets. Still, considering that this is the first substantial Elisp I have written, I think it is pretty impressive that it came together at all, thanks to the language and Emacs, which are both very helpful and powerful.
```elisp
(require 's)

(defun compile-run-buffer ()
  "Compile and run buffer."
  (interactive)
  (let* ((split-file-path (split-string buffer-file-name "/"))
         (file-name (car (last split-file-path)))
         (file-name-noext (car (split-string file-name "[.]")))
         (buffer-name (concat "compile-run: " file-name-noext))
         (buffer-name* (concat "*" buffer-name "*")))
    (make-comint buffer-name "gcc" nil "-Wall" "-Wextra" "-o" file-name-noext file-name)
    (switch-to-buffer-other-window buffer-name*)
    (set-process-sentinel (get-buffer-process (current-buffer))
                          (apply-partially
                           (lambda (prog-name proc even)
                             (if (s-suffix? "finished\n" even)
                                 (progn
                                   (insert "Compilation successful.\n\n")
                                   (comint-exec (current-buffer) prog-name (concat "./" prog-name) nil nil))
                               (insert (concat "Compilation failed!\n" even))))
                           file-name-noext))))
```
Again, the code is not really good. I'm uploading it here right now because I'm actually very excited that I wrote this. Just now I can think of ways to improve it, for example moving the compiler and the flags to variables so that they can be customized. I could also improve the presentation, because the strings printed by this function, comint, and the running program get mixed up. I'll update this blog post if I get around to updating the code.
If this is your first time hearing about Emacs, this post may look very confusing. I don't do Emacs any justice here, so do check it out somewhere like Emacs Rocks[a]. On the other hand, if you have been looking for functionality like this, I hope this helps. If you have any suggestions about the code, I'd love to hear them; you can find my email on the “about me” page. Anyway, have a good day!
=> http://emacsrocks.com/ [a]

---
title: Do kids not know computers now?
date: 2022-03-28
---
> This post is day 7 of me taking part in the #100DaysToOffload[a] challenge.
=> https://100daystooffload.com/ [a]
One discussion point I've seen around is that kids nowadays don't know how to use computers. Okay, that's a bit of a strawman, but it's the argument of this article titled File Not Found[a].
=> https://www.theverge.com/22684730/students-file-folder-directory-structure-education-gen-z [a]
The gist of the article is that Gen-Z kids are too used to search interfaces. That means they don't actually know where files are stored, or how they are organized. They only know that they can access the files by searching for them. The article talks about how professors ended up having to teach them how to navigate directory structures and file extensions.
As the article claims, it seems to be related to how modern user interfaces are designed. Our UIs nowadays are more focused around search capabilities: you just type in a search bar and find what you need.
=> /img/app-search-bar.png bemenu, displaying a partial search and several matching applications.
In some sense I do like this sort of interface. I use something like that when launching applications, both on my desktop and on my laptop! It's actually a better interface compared to hunting for icons on your desktop. I use similar interfaces in VSCode to switch between open editor tabs.
However, this is a complementary interface to hierarchy and organization. Going back to the file systems example discussed in the article, being able to search through your files and folders is useful. But it's not a replacement for hierarchy. You can't just throw files into a folder and expect to always find them accurately.
Let me give an example with Google Photos. I have been keeping all my photos on Google Photos, and between migrating photos from old phones and ones I have taken on new phones, I have over 8,000 photos. This is completely disorganized of course, but Google Photos has a search functionality. It even uses AI to recognize the items in the photos, which you can use in the search. A search for “tree” brings up photos of trees, “cat” brings up cats, and you can even tag people and pets and then search for their names. Very useful, right?
Well, it is sometimes. I recently had to remember what my wife's car license plate is. A quick search for “license plate” on Google Photos and luckily, I had taken a photo of her car that included the license plate in the frame. Success! On the other hand, I was trying to find some photos from a particular gathering with my friends. Searches for their names, names of the place, or stuff I know is in the picture turned up nothing. I eventually had to painstakingly scroll through all the photos to find the one I wanted.
This reminds me of two things. One is this article named To Organize The World's Information[a] by @dkb868@twitter.com[b]. One thing I found interesting in that article was that the concept of “the library” has been lost over the last few decades as a way to organize information. They define the library as a hierarchical, categorized directory of information. The article also talks about other organizational methods, and is worth a read.
=> https://dkb.io/post/organize-the-world-information [a]
=> https://nitter.net/dkb868 [b]
The other thing is the note taking software we're building at my workplace, Dendron[a]. One of the core tenets of Dendron is that information is hierarchical. Something the founder Kevin recognized was that other note taking software makes it easy to create new notes, but doesn't support the hierarchical structures that make those notes findable later. I've also experienced this: when I used other note taking software (or sticky notes!) I found that it was easy to just jot down a few notes, but they very quickly got lost or became hard to find when I needed them. A hierarchical organization makes it possible to actually find and reference the information later.
=> https://dendron.so/ [a]
Requiring organization creates a barrier of entry to storing information, but what good is storing information if you can't retrieve it later? This seems to work pretty well with Dendron. Would it not work for other things? Why not for taking photos? You of course want to be able to quickly snap a photo so you can record a moment before it's gone, but perhaps you could be required to organize your photos afterwards. Before modern cellphones & internet connected cameras, you'd have to get your photos developed or transfer them off an SD card: a step where you would have to (or have the opportunity to) organize your photos. I wonder if cloud services could ask you to organize your photos before syncing them as well.

---
title: Taking Backups with Duplicity
date: 2015-05-16
---
I had wanted to start taking backups for some time, but I hadn't had the time to do any research and set everything up. After reading another horror story that was saved by backups[a], I decided to start taking some backups.
=> https://www.reddit.com/r/linuxmasterrace/comments/35ljcq/couple_of_days_ago_i_did_rm_rf_in_my_home/ [a]
After doing some research on backup options, I decided on duplicity[a]. The backups are compressed, encrypted, and incremental, both saving space and ensuring security. It supports both local and SSH files (as well as many other protocols), so it has everything I need.
=> http://duplicity.nongnu.org/ [a]
I first took a backup onto my external hard drive, then onto my VPS. The main problem I encountered was that duplicity uses paramiko[a] for SSH, but it wasn't able to negotiate a key exchange algorithm with my VPS. Luckily, duplicity also supports pexpect[b], which uses OpenSSH. If you encounter the same problem, you just need to tell duplicity to use the pexpect backend by prepending your URL with pexpect+, like pexpect+ssh://example.com.
=> https://github.com/paramiko/paramiko [a]
=> http://pexpect.sourceforge.net/pexpect.html [b]
Duplicity doesn't seem to have any sort of configuration file of its own, so I ended up writing a small bash script to serve as a sort of configuration, and also to keep me from running duplicity with the wrong arguments. I kept forgetting to add an extra slash to file://, causing duplicity to back up my home directory into my home directory! :D
If anyone is interested, here's the script:
```shell
#!/bin/bash
if [[ $(id -u) != "0" ]]; then
    read -p "Backup should be run as root! Continue? [y/N]" yn
    case $yn in
        [Yy]*) ;;  # continue with the backup
        *) exit;;
    esac
fi

if [[ $1 = file://* ]]; then
    echo "Doing local backup."
    ARGS="--no-encryption"
    if [[ $1 = file:///* ]]; then
        URL=$1
    else
        echo "Use absolute paths for backup."
        exit 1
    fi
elif [[ $1 = scp* ]]; then
    echo "Doing SSH backup."
    ARGS="--ssh-askpass"
    URL="pexpect+$1"
else
    echo "Unknown URL, use scp:// or file://"
    exit 1
fi

if [[ -n "$1" ]]; then
    duplicity $ARGS --exclude-filelist /home/kaan/.config/duplicity-files /home/kaan "$URL/backup"
else
    echo "Please specify a location to backup into."
    exit 1
fi
```

---
title: Emacs as an operating system
date: 2016-04-14
modified: 2016-05-29
---
Emacs is sometimes jokingly called a good operating system with a bad text editor. Over the last year, I found myself using more and more of Emacs, so I decided to try out how much of an operating system it is. Of course, operating system here is referring to the programs that the user interacts with, although I would love to try out some sort of Emacs-based kernel.
# Emacs as a terminal emulator / multiplexer
Terminals are all about text, and Emacs is all about text as well. Not only that, but Emacs is also very good at running other processes and interacting with them. It is no surprise, I think, that Emacs works well as a terminal emulator.
Emacs comes out of the box with shell and term. Both of these commands run the shell of your choice, and give you a buffer to interact with it. Shell gives you a more emacs-y experience, while term overrides all default keymaps to give you a full terminal experience.
=> /img/emacs-terminal.png Emacs as a terminal emulator
To use Emacs as a full terminal, you can bind these to a key in your window manager. I'm using i3, and my keybinding looks like this:
```
bindsym $mod+Shift+Return exec --no-startup-id emacs --eval "(shell)"
```
You can also create a desktop file to have a launcher for this in a desktop environment. Try putting the following text in a file at ~/.local/share/applications/emacs-terminal.desktop:
```ini
[Desktop Entry]
Name=Emacs Terminal
GenericName=Terminal Emulator
Comment=Emacs as a terminal emulator.
Exec=emacs --eval '(shell)'
Icon=emacs
Type=Application
Terminal=false
StartupWMClass=Emacs
```
If you want to use term instead, replace (shell) above with (term "/usr/bin/bash").
A very useful feature of terminal multiplexers is the ability to leave the shell running, even after the terminal is closed, or the SSH connection has dropped if you are connecting over that. Emacs can also achieve this with its server-client mode. To use that, start Emacs with emacs --daemon, and then create a terminal by running emacsclient -c --eval '(shell)'. Even after you close emacsclient, since Emacs itself is still running, you can run the same command again to get back to your shell.
One caveat is that if there is a terminal/shell already running, Emacs will automatically open that whenever you try opening a new one. This can be a problem if you are using Emacs in server-client mode, or want to have multiple terminals in the same window. In that case, you can either do M-x rename-uniquely to change the name of the existing terminal, which will make Emacs create a new one next time, or you can add these hooks to your init.el to always get that behaviour:
```elisp
(add-hook 'shell-mode-hook 'rename-uniquely)
(add-hook 'term-mode-hook 'rename-uniquely)
```
# Emacs as a shell
Of course, it is not enough that Emacs works as a terminal emulator. Why not use Emacs as a shell directly, instead of bash/zsh? Emacs has you covered for that too. You can use eshell, which is a shell implementation, completely written in Emacs Lisp. All you need to do is press M-x eshell.
=> /img/eshell.png Eshell, Emacs shell
The upside is that eshell can evaluate and expand Lisp expressions, as well as redirect output to Emacs buffers. The downside, however, is that eshell is not feature complete. It lacks some features such as input redirection, and the documentation notes that it is inefficient at piping output between programs.
If you want to use eshell instead of shell or term, you can replace shell in the examples of the terminal emulator section with eshell.
# Emacs as a mail client
Zawinski's Law[a]: every program attempts to expand until it can read mail. Of course, it would be disappointing for Emacs to not handle mail as well.
=> http://www.catb.org/~esr/jargon/html/Z/Zawinskis-Law.html [a]
Emacs already ships with some mail capability. To get the full experience however, I'd recommend using mu4e[a] (mu for Emacs). I have personally set up OfflineIMAP[b] to retrieve my emails, and mu4e gives me a nice interface on top of that.
=> http://www.djcbsoftware.nl/code/mu/mu4e.html [a]
=> http://www.offlineimap.org/ [b]
=> /img/mu4e.png mu4e, mail client
I'm not going to talk about the configuration of these programs; I'd recommend checking out their documentation. Before ending this section, I also want to mention mu4e-alert[a] though.
=> https://github.com/iqbalansari/mu4e-alert [a]
# Emacs as a feed reader (RSS/Atom)
Emacs handles feeds very well too. The packages I'm using here are Elfeed[a] and Elfeed goodies[b]. Emacs can even show the images in feeds, so it covers everything I need from a feed reader.
=> https://github.com/skeeto/elfeed [a]
=> https://github.com/algernon/elfeed-goodies [b]
=> /img/elfeed.png Elfeed, feed reader
# Emacs as a file manager
Why use a different program to manage your files when you can use Emacs? Emacs ships with dired, as well as image-dired. This gives you a file browser, with optional image thumbnail support.
# Emacs as a document viewer
Want to read a pdf? Need a program to do a presentation? Again, Emacs.
=> /img/docview.png Docview, document viewer
Emacs comes with DocView[a] which has support for PDF, OpenDocument and Microsoft Office files. It works surprisingly well.
=> https://www.gnu.org/software/emacs/manual/html_node/emacs/Document-View.html [a]
Also, PDF Tools[a] brings even more PDF viewing capabilities to Emacs, including annotations, text search, and outline. After I installed PDF Tools, Emacs became my primary choice for reading PDF files.
=> https://github.com/politza/pdf-tools [a]
# Emacs as a browser
Emacs comes out of box with eww[a], a text-based web browser with support for images as well.
=> https://www.gnu.org/software/emacs/manual/html_node/eww/index.html#Top [a]
=> /img/eww.png eww, browser
Honestly, I don't think I'll be using Emacs to browse the web. But still, it is nice that the functionality is there.
# Emacs as a music player
Emacs can also act as a music player thanks to EMMS[a], the Emacs MultiMedia System. If you are wondering, it doesn't play the music by itself, but instead uses other players like vlc or mpd.
=> https://www.gnu.org/software/emms/ [a]
It has support for playlists, and can show thumbnails as well. As for music types, it supports whatever the players it uses support, which means you can use basically any file type.
# Emacs as an IRC client
I don't use IRC a lot, but Emacs comes out of the box with support for that as well, thanks to ERC[a].
=> https://www.emacswiki.org/emacs?action=browse;oldid=EmacsIrcClient;id=ERC [a]
=> /img/erc.png erc, Emacs IRC client
# Emacs as a text editor
Finally, Emacs also can work well as a text editor.
Emacs is a pretty fine text editor out of the box, but I want to mention some packages here.
First, multiple cursors[a]. Multiple cursors mode allows you to edit text at multiple places at the same time.
=> https://github.com/magnars/multiple-cursors.el [a]
I also want to mention undo-tree[a]. It acts like a mini revision control system, allowing you to undo and redo without ever losing any text.
=> http://www.dr-qubit.org/emacs.php#undo-tree [a]
Another great mode is iy-go-to-char[a]. It allows you to quickly jump around by going to the next/previous occurrence of a character. It is very useful when you are trying to move around within a line.
=> https://github.com/doitian/iy-go-to-char [a]
Ace Jump Mode[a] allows you to jump around the visible buffers. It can jump around based on initial characters of words, or jump to specific lines. It can also jump from one buffer to another, which is very useful when you have several buffers open in your screen.
=> https://github.com/winterTTr/ace-jump-mode/ [a]
=> /img/ace-jump-mode.png Ace Jump Mode
Finally, I want to mention ag.el[a], which is an Emacs frontend for the Silver Searcher. If you don't know about ag, it is a replacement for grep that recursively searches directories, has some special handling for projects, and is very fast.
=> https://github.com/Wilfred/ag.el [a]
# Emacs as an IDE
People sometimes compare Emacs to IDEs and complain that a text editor such as Emacs doesn't have enough features. What they are forgetting, of course, is that Emacs is an operating system, and we can have an IDE in it as well.
There are different packages for every language, so I'll only be speaking about language-agnostic ones.
For interacting with git, magit[a] is a wonderful interface.
=> http://magit.vc/ [a]
=> /img/magit.png Magit, Git Porcelain
For auto-completion, Company mode[a] works wonders. I rely heavily on completion while writing code, and company mode has support for anything I tried writing.
=> https://company-mode.github.io/ [a]
If you like having your code checked as you type, flycheck[a] has you covered. It has support for many tools and languages.
=> https://www.flycheck.org/ [a]
=> /img/company-flycheck.png Company Mode and Flycheck

---
title: Getting Deus Ex GOTY Edition running on Linux
date: 2022-03-12
---
I've been struggling with this for a few hours, so I might as well document how I did it.
I have a particular setup, which ended up causing issues. Most importantly, I'm using Sway, a tiling Wayland compositor, and a flatpak install of Steam.
## Mouse doesn't move when the game is launched
It looks like there's a problem with the game window grabbing the cursor on my setup, so moving the mouse doesn't move the cursor in the game, and if you move it too much to the side it takes you out of the game window.
The solution to this is using Gamescope, which is a nested Wayland compositor that makes the window inside it play nice with your actual compositor.
Because I'm using the flatpak install of Steam, I needed to install the flatpak version of gamescope[a]. One catch here is that for me, this wouldn't work if I also had the flatpak MangoHud installed. The only solution I could come up with for now was to uninstall MangoHud.
=> https://github.com/flathub/com.valvesoftware.Steam.Utility.gamescope [a]
```shell
flatpak remove org.freedesktop.Platform.VulkanLayer.MangoHud # if you have it installed
flatpak install com.valvesoftware.Steam.Utility.gamescope
```
Then, right click on the game and select Properties, and in the launch options type gamescope -f -- %command%. This will launch the game inside gamescope, and the cursor should move inside the game now.
## The game is too dark to see anything
It looks like the game relied on some old DirectX or OpenGL features, because once you do launch into the game, everything is extremely dark and hard to see. At first I was wondering how anyone could play the game like this, but it turns out that's not how the game is supposed to look!
I finally managed to solve this by following the installer steps for the Deus Ex CD on Lutris[a]. Yeah, it's a roundabout way to solve it, but it worked.
=> https://lutris.net/games/install/948/view [a]
First download the updated D3D9 and OpenGL renderers from the page, and extract them into the System folder inside the game.
```shell
cd "$HOME/.var/app/com.valvesoftware.Steam/.steam/steam/steamapps/common/Deus Ex/System"
wget https://lutris.net/files/games/deus-ex/dxd3d9r13.zip
wget https://lutris.net/files/games/deus-ex/dxglr20.zip
unzip dxd3d9r13.zip
unzip dxglr20.zip
```
Next, download and install the 1112fm patch.
```cd "$HOME/.var/app/com.valvesoftware.Steam/.steam/steam/steamapps/common/Deus Ex/System"
wget https://lutris.net/files/games/deus-ex/DeusExMPPatch1112fm.exe
env WINEPREFIX="$HOME/.var/app/com.valvesoftware.Steam/.steam/steam/steamapps/compatdata/6910/pfx/" wine DeusExMPPatch1112fm.exe
```
Follow the steps of the installer. It should automatically find where the game is installed. Once the install is done, launch the game, then head into the settings and pick “Display Settings”, then “Rendering Device”. In the renderer selection window, pick “Show all devices”, and then select “Direct3D9 Support”.
=> /img/deus-ex-render-settings.png
Launch back into the game, head into the display settings again, pick your resolution, and restart the game. Then head into the display settings yet again, this time change the color depth to 32 bit. Restart once more. Yes, you do have to do them separately or the game doesnt save the color depth change for some reason. Finally, you can start playing!
=> /img/deus-ex-renderer-comparison.png
## Other small issues
Here are a few more issues you might hit during this whole process:
> My cursor moves too fast!
You need to turn down the cursor speed. My mouse has buttons to adjust the speed on the fly, so I use those to turn down the speed.
> After changing resolution, I cant move my cursor!
Use the keyboard shortcuts (arrow keys and enter) to exit the game. It should work again when you restart.
> The cursor doesnt move when I open the game, even with gamescope!
Im not fully sure why or how this happens, but a few things I found useful:
* When the game is launching, and its showing the animation of the studio logo, dont click! Press escape to bring up the menu instead.
* Press escape to bring the menu up, then hit escape again to dismiss it. It sometimes starts working after that.
* Use the keyboard to exit the game and restart. It always works the next time for me.
# Homepage of Kaan Barmore-Genç
Hey folks!
I'm a Software Engineer at Dendron, and a recent Master's graduate from the Ohio
State University. I'm an avid Linux user, an enthusiast of many programming
languages, a home cook, and an amateur gardener.
=> https://dendron.so Dendron
=> https://bgenc.net/recipes/ My recipes
My interests include building web and mobile applications, both at the front and
back end. Over the years I learned and used many programming languages and
technologies, including JavaScript, TypeScript, React, React Native, Rust,
Python, Java, C, C++, Clojure, and Haskell. Pretty much everything I've worked on
is open source and available on my Github page.
I published several papers and participated in academic reviews during graduate school. You can find them below.
=> /publications.gmi My publications
Here are some links if you need to reach me.
=> mailto:kaan@bgenc.net kaan@bgenc.net
=> /extra/kaangenc.gpg GPG key
=> https://github.com/SeriousBug Github
=> https://www.linkedin.com/in/kaan-genc-8489b9205/ LinkedIn
=> /extra/cv.pdf My CV
=> https://mastodon.technology/@kaan My Mastodon
This page is also available on HTTP/HTML if you prefer that.
=> https://bgenc.net HTTP mirror
Finally, below is a list of all my blog posts. These are not sorted by date at the moment, but I'm working on fixing that soon.
~~~~~~~~
title: Mass batch processing on the CLI
## date: 2022-03-19
> This post is day 4 of me taking part in the #100DaysToOffload[a] challenge.
=> https://100daystooffload.com/ [a]
Some time ago, I needed to process a lot of video files with ffmpeg. This is usually pretty easy to do: for file in *.mp4 ; do ffmpeg ... ; done is about all you need in most cases. However, sometimes the files you are trying to process are in different folders. And sometimes you want to process some files in a folder but not others. Thats the exact situation I was in, and I was wondering if I needed to find some graphical application with batch processing capabilities so I could queue up all the processing I need.
After a bit of thinking though, I realized I could do this very easily with a simple shell script! That shell script lives in my mark-list[a] repository.
=> https://github.com/SeriousBug/mark-list [a]
The idea is simple, you use the command to mark a bunch of files. Every file you mark is saved into a file for later use.
```$ mark-list my-video.mp4 # Choose a file
Marked 1 file.
$ mark-list *.webm # Choose many files
Marked 3 files.
$ cd Downloads
$ mark-list last.mpg # You can go to other directories and keep marking
```
You can mark a single file, or a bunch of files, or even navigate to other directories and mark files there.
Once you are done marking, you can recall what you marked with the same tool:
```$ mark-list --list
/home/kaan/my-video.mp4
/home/kaan/part-1.webm
/home/kaan/part-2.webm
/home/kaan/part-3.webm
/home/kaan/Downloads/last.mpg
```
You can then use this in the command line. For example, I was trying to convert everything to mkv files.
```for file in $(mark-list --list) ; do ffmpeg -i "${file}" "${file}.mkv" ; done
```
It works! After you are done with it, you then need to clear out your marks:
```mark-list --clear
```
Hopefully this will be useful for someone else as well. It does make it a lot easier to just queue up a lot of videos, and convert all of them overnight.
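The core idea is small enough to sketch in plain shell. Heres a toy re-implementation to show the concept; the function names and the marks file are made up for this sketch, and the real mark-list is its own tool:

```shell
# A toy version of the idea: record absolute paths in a file, replay them later.
# MARKS, mark, mark_list, and mark_clear are all hypothetical names.
MARKS="$(mktemp)"
mark()       { for f in "$@"; do printf '%s/%s\n' "$PWD" "$f" >> "$MARKS"; done; echo "Marked $# file(s)."; }
mark_list()  { cat "$MARKS"; }
mark_clear() { : > "$MARKS"; }

mark video-1.mp4 video-2.mp4   # records two absolute paths
mark_list                      # prints them back, one per line
mark_clear                     # empties the mark file
```

Replaying the list through a while read -r loop instead of an unquoted $(...) also keeps paths with spaces intact.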
~~~~~~~~
title: Motion Interpolation, 24 FPS to 60 FPS with mpv, VapourSynth and MVTools
date: 2015-07-18
## modified: 2015-07-20
Watching videos at 60 FPS is great. It makes the video significantly smoother and much more enjoyable. Sadly, lots of movies and TV shows are still at 24 FPS. However, I recently discovered that it is actually possible to interpolate the extra frames by using motion interpolation, and convert a video from 24 FPS to 60 FPS in real time. While it is far from perfect, I think the visual artifacts are a reasonable tradeoff for high framerate.
Firstly, what we need is mpv with VapourSynth enabled, and MVTools plugin for VapourSynth. VapourSynth must be enabled while compiling mpv. I adopted an AUR package mpv-vapoursynth[a] which you can use if you are on Arch. Otherwise, all you need to do is use --enable-vapoursynth flag when doing ./waf --configure. They explain the compilation on their repository[b], so look into there if you are compiling yourself.
=> https://aur4.archlinux.org/packages/mpv-vapoursynth/ [a]
=> https://github.com/mpv-player/mpv [b]
After that, we need MVTools plugin for VapourSynth. This is available on Arch via vapoursynth-plugin-mvtools[a], otherwise you can find their repository here[b]. There is also a PPA for Ubuntu[c] where you can find vapoursynth-extra-plugins, but I havent used it myself so I cant comment on it.
=> https://www.archlinux.org/packages/community/x86_64/vapoursynth-plugin-mvtools/ [a]
=> https://github.com/dubhater/vapoursynth-mvtools [b]
=> https://launchpad.net/~djcj/+archive/ubuntu/vapoursynth [c]
After both of these are enabled, we need a script to use MVTools from VapourSynth. There is one written by Niklas Haas, which you can find here as mvtools.vpy[a]. Personally, I tweaked the block sizes and precision to my liking, as well as removing the resolution limit he added. Ill put the modified version here:
=> https://github.com/haasn/gentoo-conf/blob/master/home/nand/.mpv/filters/mvtools.vpy [a]
```# vim: set ft=python:
import vapoursynth as vs
core = vs.get_core()
clip = video_in
dst_fps = display_fps
# Interpolating to fps higher than 60 is too CPU-expensive, smoothmotion can handle the rest.
while (dst_fps > 60):
    dst_fps /= 2
# Skip interpolation for 60 Hz content
if not (container_fps > 59):
    src_fps_num = int(container_fps * 1e8)
    src_fps_den = int(1e8)
    dst_fps_num = int(dst_fps * 1e4)
    dst_fps_den = int(1e4)
    # Needed because clip FPS is missing
    clip = core.std.AssumeFPS(clip, fpsnum = src_fps_num, fpsden = src_fps_den)
    print("Reflowing from ",src_fps_num/src_fps_den," fps to ",dst_fps_num/dst_fps_den," fps.")
    sup = core.mv.Super(clip, pel=1, hpad=8, vpad=8)
    bvec = core.mv.Analyse(sup, blksize=8, isb=True , chroma=True, search=3, searchparam=1)
    fvec = core.mv.Analyse(sup, blksize=8, isb=False, chroma=True, search=3, searchparam=1)
    clip = core.mv.BlockFPS(clip, sup, bvec, fvec, num=dst_fps_num, den=dst_fps_den, mode=3, thscd2=12)
clip.set_output()
```
At this point, you should be able to try this out as suggested in the script. To set this up more permanently, Id suggest placing this script as ~/.config/mpv/mvtools.vpy, and also writing the following as ~/.config/mpv/mpv.conf:
```hwdec=no
vf=vapoursynth=~/.config/mpv/mvtools.vpy
```
Now, whenever you open mpv, it will always use motion interpolation.
The result is fairly good. I noticed some significant artifacts while watching anime, but it works well with movies. Im guessing that it is harder to track the motion in animations since they are generally exaggerated.
One thing to keep in mind, however, is performance. With pel=2, viewing a 1080p video results in around 90% CPU usage across all cores and 1.6 GB of RAM on my Intel i7 4700MQ. With pel=1, CPU usage goes down to about 60% per core. This process is very heavy on the processor, and you may have trouble unless you have a fast CPU.
~~~~~~~~
title: My response to Aurynn Shaws “Contempt Culture” post
## date: 2022-03-27
> This post is day 6 of me taking part in the #100DaysToOffload[a] challenge.
=> https://100daystooffload.com/ [a]
I recently came across Aurynn Shaws article on “Contempt Culture”[a]. Im a bit late to the party, but I wanted to talk about this too.
=> https://blog.aurynn.com/2015/12/16-contempt-culture/ [a]
Aurynns article talks about how some programming languages are considered inferior, and programmers using these languages are considered less competent. Its a good article, and you should take a look at it if you havent.
## My thoughts
One thing Ive come to realize over the years is that there are really no “bad programming languages”. Ignoring esolangs like brainfuck which are not really meant to be used for anything serious, most programming languages are designed to fit a niche. Im using the term like its used in ecology: every programming language has a place in the ecosystem of technology and programming.
PHP is bad? PHP certainly has its drawbacks, but it also has its advantages. “Drop these files into a folder and it works” is an amazing way to get started programming. Its also a great way to inject a bit of dynamic content into otherwise static pages. In fact, its a simpler and more straightforward solution than building a REST API and a web app where you have to re-invent server side rendering just to get back to where PHP already was!
Thats not to say PHP is perfect or the best language to use. Its a language I personally dont like. But that doesnt make it a bad or “stupid” programming language. At worst its a programming language that doesnt fit my needs. If I extrapolate that and say that PHP is a bad language, that would instead show my ego. Do I really think Im so great that anything I dont like is just immediately bad? Something Aurynn said resonates with me here:
> It didnt matter that it was (and remains) difficult to read, it was that we were better for using it.
I just want to conclude this with one thing: next time you think a programming language or tool or whatever is bad, think to yourself whether thats because it doesnt feel cool or because you saw others making fun of it, or because you actually evaluated the pros and cons and came up with a calculated decision.
~~~~~~~~
title: Switching to pass
## date: 2015-03-30
For some time, I used LastPass to store my passwords. While LastPass works well, it doesnt fit into the keyboard driven setup I have. I have been looking into alternatives for some time; I looked into KeePassX, but just like LastPass, it doesnt give me any way to set up keyboard shortcuts. On the other hand, I recently came across pass[a], and it provides everything I want.
=> http://www.passwordstore.org/ [a]
Pass uses GPG keys to encrypt the passwords, and git to keep revisions and backups. It integrates well with the shell, and there is a dmenu script, a Firefox plugin and an Android app. All the passwords are just GPG encrypted files, stored in some folders anyway, so you dont need anything special to work with them.
=> /img/passmenu.png passmenu, the dmenu pass script
So first, I needed to migrate my passwords from LastPass to pass. The website lists some scripts for migration, but sadly I missed that when I first looked at the page. So I decided to write a python script to handle the migration[a] myself. It inserts all passwords in domain/username format, and if there is any extra data written, it is added after the password as well. Secure notes are placed into their own folder, and any “Generated Password for …” entries are skipped. If youre migrating from LastPass to pass, feel free to give it a try. If you are taking an export from their website however, do make sure that there is no whitespace before and after the csv.
=> https://gist.github.com/SeriousBug/e9f33873d10ad944cbe6 [a]
=> /img/password_store.png Password Store, the pass Android app
I certainly recommend trying out pass. It works very well, and it fits in with the unix philosophy.
~~~~~~~~
## no-ttr: true
```<div> <div class="publication">
## Crafty: Efficient, HTM-Compatible Persistent Transactions
<div class="authors">Kaan Genç, Michael D. Bond, and Guoqing Harry Xu</div>
<div class="conf">ACM SIGPLAN Conference on Programming Language Design and Implementation <a href="https://pldi20.sigplan.org/home">(PLDI 2020)</a>, Online, June 2020</div>
```
Crafty is a library for transactional storage, built for new non-volatile memory hardware. Taking advantage of hardware transactional capabilities of modern CPUs, it provides a low-overhead option that also eliminates the need for additional concurrency control.
Talk[a] Paper[b] Extended Paper[c] Implementation[d] Poster[e] </div>
=> https://www.youtube.com/watch?v=wdVLlQXV1to [a]
=> https://dl.acm.org/doi/10.1145/3385412.3385991 [b]
=> https://arxiv.org/pdf/2004.00262.pdf [c]
=> https://github.com/PLaSSticity/Crafty [d]
=> /extra/Crafty Poster.pdf [e]
```<div class="publication">
## Dependence Aware, Unbounded Sound Predictive Race Detection
<div class="authors">Kaan Genç, Jake Roemer, Yufan Xu, and Michael D. Bond</div>
<div class="conf">ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications <a href="https://2019.splashcon.org/track/splash-2019-oopsla">(OOPSLA 2019)</a>, Athens, Greece, October 2019</div>
```
This paper presents 2 data race detection analyses which analyze a single run of a program to predict data races that can happen in other runs. These analyses take advantage of data and control flow dependence to accurately understand how the analyzed program works, expanding what races can be predicted.
Talk[a] Extended Paper (updated version)[b] Paper[c] Corrigendum to paper[d] Implementation[e] Poster[f] </div>
=> https://www.youtube.com/watch?v=YgZWnc31tVQ [a]
=> https://arxiv.org/pdf/1904.13088.pdf [b]
=> https://dl.acm.org/doi/10.1145/3360605 [c]
=> https://dl.acm.org/action/downloadSupplement?doi=10.1145%2F3360605&file=3360605-corrigendum.pdf [d]
=> https://github.com/PLaSSticity/SDP-WDP-implementation [e]
=> /extra/DepAware Poster.pdf [f]
```<div class="publication">
## SmartTrack: Efficient Predictive Race Detection
<div class="authors">Jake Roemer, Kaan Genç, and Michael D. Bond</div>
<div class="conf">ACM SIGPLAN Conference on Programming Language Design and Implementation <a href="https://pldi20.sigplan.org/home">(PLDI 2020)</a>, Online, June 2020 </div>
```
Predictive data race detection methods greatly improve the number of data races found, but they typically significantly slow down programs compared to their non-predictive counterparts. SmartTrack, through improved analyses and clever algorithms, reduces their overhead to just around non-predictive analyses without impacting their performance.
Paper[a] Extended Paper[b] </div>
=> http://web.cse.ohio-state.edu/~mikebond/smarttrack-pldi-2020.pdf [a]
=> https://arxiv.org/pdf/1905.00494.pdf [b]
```<div class="publication">
## High-Coverage, Unbounded Sound Predictive Race Detection
<div class="authors">Jake Roemer, Kaan Genç, and Michael D. Bond</div>
<div class="conf">ACM SIGPLAN Conference on Programming Language Design and Implementation <a href="https://pldi18.sigplan.org/">(PLDI 2018)</a>, Philadelphia, PA, USA, June 2018</div>
```
Predictive data race detection methods typically walk a tight line between predicting more races and avoiding false races. This paper presents a new analysis that can predict more races, and a method to efficiently eliminate false races.
Paper[a] Extended Paper[b] </div> </div>
=> http://web.cse.ohio-state.edu/~bond.213/vindicator-pldi-2018.pdf [a]
=> http://web.cse.ohio-state.edu/~bond.213/vindicator-pldi-2018-xtr.pdf [b]
# Activities
PLDI 2021[a] Artifact Evaluation Committee member
=> https://pldi21.sigplan.org/track/pldi-2021-PLDI-Research-Artifacts [a]
ASPLOS 2021[a] Artifact Evaluation Committee member
=> https://asplos-conference.org/2021/ [a]
OOPSLA 2020[a] Artifact Evaluation Committee member
=> https://2020.splashcon.org/track/splash-2020-Artifacts [a]
~~~~~~~~
title: My local data storage setup
## date: 2022-03-10
Recently, Ive needed a bit more storage. In the past Ive relied on Google Drive, but if you need a lot of space Google Drive becomes prohibitively expensive. The largest option available, 2 TB, runs you $100 a year at the time of writing. While Google Drive comes with a lot of features, it also comes with a lot of privacy concerns, and I need more than 2 TB anyway. Another option would be Backblaze B2 or AWS S3, but the cost is even higher. Just to set a point of comparison, 16 TB of storage would cost $960 a year with B2 and a whopping $4000 a year with S3.
Luckily in reality, the cost of storage per GB has been coming down steadily. Large hard drives are cheap to come by, and while these drives are not incredibly fast, they are much faster than the speed of my internet connection. Hard drives it is then!
While I could get a very large hard drive, its generally a better idea to get multiple smaller hard drives. Thats because these drives often offer a better $/GB rate, but also because it allows us to mitigate the risk of data loss. So after a bit of search, I found these “Seagate Barracuda Compute 4TB” drives. You can find them on Amazon[a] or BestBuy[b].
=> https://www.amazon.com/gp/product/B07D9C7SQH/ [a]
=> https://www.bestbuy.com/site/seagate-barracuda-4tb-internal-sata-hard-drive-for-desktops/6387158.p?skuId=6387158 [b]
These hard drives are available for $70 each at the time Im writing this, and I bought six of them. This gets me to around $420, plus a bit more for SATA cables. Looking at Backblaze Hard Drive Stats[a], I think its fair to assume these drives will last at least 5 years. Dividing the cost by the expected lifetime, that gets me $84 per year, far below what the cloud storage costs! Its of course not as reliable, and it requires maintenance on my end, but the difference in price is just too large to ignore.
=> https://www.backblaze.com/blog/backblaze-drive-stats-for-2021/ [a]
## Setup
I decided to set this all up inside my desktop computer. I have a large case so fitting all the hard drives in is not a big problem, and my motherboard does support 6 SATA drives (in addition to the NVMe that Im booting off of). I also run Linux on my desktop computer, so Ive got all the required software available.
For the software side of things, I decided to go with mdadm and ext4. There are also other options available like ZFS (not included in the linux kernel) or btrfs (raid-5 and raid-6 are known to be unreliable), but this was the setup I found the most comfortable and easy to understand for me. How it works is that mdadm combines the disks and presents it as a block device, then ext4 formats and uses the block device the same way you use it with any regular drive.
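To illustrate that layering, heres roughly what the setup boils down to. The RAID level, device names, and mount point here are just for illustration; follow the wiki below for the actual steps:

```shell
# mdadm combines the six disks into a single block device (RAID-6 shown as an example)
mdadm --create /dev/md127 --level=6 --raid-devices=6 \
  /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
# ext4 then treats /dev/md127 like any ordinary drive
mkfs.ext4 /dev/md127
mount /dev/md127 /mnt/storage
```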
### Steps
I was originally planning to write the steps I followed here, but in truth I just followed whatever the ArchLinux wiki[a] told me. So Ill just recommend you follow that as well.
=> https://wiki.archlinux.org/title/RAID#Installation [a]
The only thing Ill warn you about is that the wiki doesnt clearly note just how long this process takes. It took almost a week for the array to build, and until the build is complete the array runs at reduced performance. Be patient, and just give it some time to finish. As a reminder, you can always check the build status with cat /proc/mdstat.
## Preventative maintenance
Hard drives have a tendency to fail, and because RAID arrays are resilient, the failures can go unnoticed. You need to regularly check that the array is okay. Unfortunately, while there are quite a few resources online on how to set up RAID, very few of them actually talk about how to set up scrubs (full scans to look for errors) and error monitoring.
For my setup, I decided to set up systemd to check and report issues. For this, I first set up two timers: one that checks if there are any reported errors on the RAID array, and another that scrubs the RAID array. A systemd timer comes in two parts, a service file and a timer file, so here are all the files.
* array-scrub.service
```toml
[Unit]
Description=Scrub the disk array
After=multi-user.target
OnFailure=report-failure-email@array-scrub.service

[Service]
Type=oneshot
User=root
ExecStart=bash -c "/usr/bin/echo check > /sys/block/md127/md/sync_action"

[Install]
WantedBy=multi-user.target
```
* array-scrub.timer
```toml
[Unit]
Description=Periodically scrub the array.

[Timer]
OnCalendar=Sat *-*-* 05:00:00

[Install]
WantedBy=timers.target
```
The timer above is the scrub operation, it tells RAID to scan the drives for errors. It actually takes up to a couple days in my experience for the scan to complete, so I run it once a week.
* array-report.service
```toml
[Unit]
Description=Check raid array errors that were found during a scrub or normal operation and report them.
After=multi-user.target
OnFailure=report-failure-email@array-report.service

[Service]
Type=oneshot
ExecStart=/usr/bin/mdadm -D /dev/md127

[Install]
WantedBy=multi-user.target
```
* array-report.timer
```toml
[Unit]
Description=Periodically report any issues in the array.

[Timer]
OnCalendar=daily

[Install]
WantedBy=timers.target
```
And this timer above checks the RAID array status to see if there were any errors found. This timer runs much more often (once a day), because its instant, and also because RAID can find errors during regular operation even when you are not actively running a scan.
### Error reporting
Another important thing here is this line in the service file: OnFailure=report-failure-email@array-report.service
The automated checks are of no use if I dont know when something actually fails. Luckily, systemd can run a service when another service fails, so Im using this to report failures to myself. Heres what the service file looks like:
* report-failure-email@.service
```toml
[Unit]
Description=status email for %i to user

[Service]
Type=oneshot
ExecStart=/usr/local/bin/systemd-email address %i
User=root
```
* /usr/local/bin/systemd-email
```sh
#!/bin/sh

/usr/bin/sendmail -t <<ERRMAIL
To: homelab@bgenc.net
From: systemd <root@$HOSTNAME>
Subject: Failure on $2
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8

$(systemctl status --lines 100 --no-pager "$2")
ERRMAIL
```
The service just runs this shell script, which is a wrapper around sendmail. The %i in the service file is the part after the @ when you use the service: you can see that the OnFailure hook puts array-report after the @, which then gets passed to the email service, which in turn passes it on to the mail script.
To send emails, you also need to set up sendmail. I decided to install msmtp[a], and set it up to use my GMail account to send me an email.
=> https://wiki.archlinux.org/title/Msmtp [a]
To test if the error reporting works, edit array-report.service and change the ExecStart line to ExecStart=false. Then run the report service with systemctl start array-report.service. You should now get an email letting you know that the array-report service failed, with the last 100 lines of the service status attached.
~~~~~~~~
title: Running graphical user services with systemd
## date: 2022-03-18
> This post is day 3 of me taking part in the #100DaysToOffload[a] challenge.
=> https://100daystooffload.com/ [a]
Ive recently switched from KDE Plasma to sway as my window manager. I had a problem with the change though: the amazing kdeconnect service wasnt working!
My first attempt at fixing this was to just add a line to my sway config to launch it along with sway.
```exec /usr/lib/kdeconnectd
```
Looks simple enough. But for some reason, kdeconnectd would just disappear after a while. It would appear to run at startup, and then an hour or two later I pull up the kdeconnect app on my phone and it would tell me that my computer is disconnected.
The biggest issue here was that I had no way to see why kdeconnect had failed. In comes systemd to save the day. Systemd is a service manager, so it will actually maintain the logs for these services. That means if kdeconnect is crashing, I can check the logs for kdeconnect to see why it crashed. I can also configure it to auto-restart after a crash if I want to.
To launch graphical applications with systemd though, you need to pass the appropriate environment variables to it so it knows how to launch new windows. I added this line to my sway config to do exactly that.
```# Pass all variables to dbus & systemd to run graphical user services
exec dbus-update-activation-environment --all --systemd
```
Next, we need to write a service file to run the application. This is easier than it sounds; heres the service file I wrote for kdeconnect:
```[Unit]
Description=Run kdeconnectd.
After=graphical-session.target
StartLimitIntervalSec=600
StartLimitBurst=5
[Service]
Type=simple
ExecStart=/usr/lib/kdeconnectd
Restart=on-failure
RestartSec=5s
[Install]
WantedBy=graphical-session.target
```
I saved this as ~/.config/systemd/user/kdeconnectd.service. Finally, I enabled it for my user with systemctl --user enable kdeconnectd.service and then restarted.
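Spelled out, enabling the unit and checking its logs later looks like this (standard systemctl and journalctl flags):

```shell
systemctl --user daemon-reload                 # pick up the new unit file
systemctl --user enable --now kdeconnectd.service
journalctl --user -u kdeconnectd.service -e    # this is where any crash logs end up
```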
The service is configured to automatically restart on failure, but not if it failed more than 5 times in the last 10 minutes. Systemd also waits 5 seconds before trying to restart the failed service. This way if it crashes for some reason, it will restart. But if it keeps crashing rapidly, it wont keep trying to restart which could take up too much system resources.
I can now check how the service is doing with systemd!
```Warning: The unit file, source configuration file or drop-ins of kdeconnectd.service changed on disk. Run 'systemctl --user daemon-reload>
● kdeconnectd.service - Run kdeconnectd.
Loaded: loaded (/home/kaan/.config/systemd/user/kdeconnectd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2022-03-17 14:18:15 EDT; 1h 46min ago
Main PID: 2188363 (kdeconnectd)
Tasks: 6 (limit: 77007)
Memory: 24.2M
CPU: 2.440s
CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/kdeconnectd.service
└─2188363 /usr/lib/kdeconnectd
Mar 17 14:20:58 eclipse systemd[817]: /home/kaan/.config/systemd/user/kdeconnectd.service:6: Unknown key name 'type' in section 'Service'>
Mar 17 15:16:11 eclipse kdeconnectd[2188363]: QObject::connect(KWayland::Client::Registry, Unknown): invalid nullptr parameter
Mar 17 15:16:11 eclipse kdeconnectd[2188363]: kdeconnect.plugin.battery: No Primary Battery detected on this system. This may be a bug.
Mar 17 15:16:11 eclipse kdeconnectd[2188363]: kdeconnect.plugin.battery: Total quantity of batteries found: 0
Mar 17 15:23:26 eclipse kdeconnectd[2188363]: QObject::connect(KWayland::Client::Registry, Unknown): invalid nullptr parameter
Mar 17 15:23:26 eclipse kdeconnectd[2188363]: kdeconnect.plugin.battery: No Primary Battery detected on this system. This may be a bug.
Mar 17 15:23:26 eclipse kdeconnectd[2188363]: kdeconnect.plugin.battery: Total quantity of batteries found: 0
Mar 17 15:23:26 eclipse kdeconnectd[2188363]: QMetaObject::invokeMethod: No such method KIO::StoredTransferJob::slotDataReqFromDevice()
Mar 17 15:24:35 eclipse kdeconnectd[2188363]: QMetaObject::invokeMethod: No such method KIO::StoredTransferJob::slotDataReqFromDevice()
Mar 17 15:57:29 eclipse systemd[817]: /home/kaan/.config/systemd/user/kdeconnectd.service:9: Unknown key name 'type' in section 'Service'>
```
A bunch of warnings so far, but no crashes yet. But if it does crash again, Ill finally know why.
~~~~~~~~
title: A little type system trick in Rust
## date: 2022-03-15
> This post is day 1 of me taking part in the #100DaysToOffload[a] challenge.
=> https://100daystooffload.com/ [a]
While working on a small project recently, I ended up writing this type in Rust.
```type ImageData = Arc<Mutex<Option<ImageBuffer<Rgba<u8>, Vec<u8>>>>>;
```
Even though I wrote it myself, it actually took me a bit after writing it to figure out what this type was doing so I wanted to write about it.
Let me start from outside-in, the first type we have is Arc. Arc stands for “atomic reference counting”. Reference counting[a] is a method to handle ownership of the data, or in other words to figure out when the data needs to be freed. Garbage collected languages do this transparently in the background, but in Rust we explicitly need to state that we want it. Atomic means this is done using atomic operations[b], so it is thread safe. In my case, I needed this because this data was going to be shared between multiple threads, and I didnt know exactly when I would be “done” with the data.
=> https://en.wikipedia.org/wiki/Reference_counting [a]
=> https://en.wikipedia.org/wiki/Linearizability#Primitive_atomic_instructions [b]
The next type is Mutex, which means mutual exclusion[a] or locking. Locks are used to restrict access to data to a single thread at a time. That means whatever type is inside of this is not thread safe, so Im using the lock to protect it. Which is true!
=> https://en.wikipedia.org/wiki/Lock_(computer_science) [a]
The type after that is Option. This basically means “nullable”, there may or may not be a thing inside this. The interesting thing here is that this is a sum type[a], so Rust helps remind us that this is nullable without introducing a nullability concept to the language. Its just part of the type system!
=> https://en.wikipedia.org/wiki/Tagged_union [a]
Then we have ImageBuffer, a type from the popular image crate[a]. Not much to talk about with this, thats the data I wanted to store.
=> https://docs.rs/image/latest/image/index.html [a]
The next thing that is interesting is the Rgba<u8> and Vec<u8> inside the image buffer. What that means (and Im speculating here because Im lazy/too busy to check) is that Rgba is just a basic wrapper type (or a “newtype”). It makes the compiler enforce the type of the image data thats stored in this image buffer, so the user doesnt mix up different data types. Similarly for Vec<u8>, (I think) it means that the data inside this buffer is stored in a vector.
Finally, u8 is probably self descriptive, the pixels and the vector are made out of 8-bit unsigned integers.
~~~~~~~~
title: State of Rust GUIs
## date: 2022-03-17
> This post is day 2 of me taking part in the #100DaysToOffload[a] challenge.
=> https://100daystooffload.com/ [a]
The website Are we GUI Yet?[a] helpfully lists a lot of the libraries and frameworks available for making a GUI in Rust. Ive been looking into making a GUI program in Rust, so Ive been working my way through some of these options.
=> https://www.areweguiyet.com/ [a]
This is not a thorough review, just my thoughts after a brief look. Id recommend looking over the website and deciding for yourself.
## Best candidate: Dioxus
* Website: [a]
=> https://dioxuslabs.com/ [a]
Dioxus is probably the option I like the best from a quick look. Declarative applications similar to React, encapsulated components, first class async support, and good type checking.
Downsides? Right now its web only: desktop applications are just web applications rendered inside a web view. Thats okay for cross-platform apps, but not for what I want to do, which is a lightweight native application.
## Better Electron: Tauri
* Website: [a]
=> https://github.com/tauri-apps/tauri [a]
Tauri is a really good replacement for Electron. You can see the comparison on their GitHub page: smaller binaries, less memory use, and faster launch times.
But again, it is a web app running in a web view, not a native desktop app. Even though Tauri uses less memory than Electron, it still uses ~180 MB according to their comparison. And the “fast” launch time is still around 0.4 seconds, way longer than what I would expect.
## My current preference: Slint
* Website: [a]
=> https://slint-ui.com/ [a]
I really like Slint. It is a native GUI toolkit with its own OpenGL renderer and an optional Qt backend. From some basic experimentation, it seems to launch in less than 50ms, and uses less than 80 MB of memory (mostly shared libraries).
You can write the code in either .slint files (and they actually have okay editor support for this file type), or inside macros in your code files. The code also looks pretty intuitive.
The downsides? Theming support is minimal to nonexistent, you cant dynamically generate UI elements (well, kind of: you can change them based on properties you set at runtime, but the components themselves are hardcoded), and the code sometimes gets awkward due to current limitations.
```MainWindow := Window {
// You then have to bind to this callback inside rust code. No way to just write a hook that calls a rust function.
callback save_to_file(string);
HorizontalLayout {
height: 32px;
FilePath := LineEdit {
placeholder_text: "placeholder here";
}
Button {
text: "Save to file";
clicked => { save_to_file(FilePath.text); }
}
}
}
```
There is also no way to do some things, like setting a dialog hint for your main window, which is something I needed to do.
## Conclusion?
It looks like the state of GUIs in Rust is still “not yet”. There are a few more projects I need to look at, like Relm[a], but their code looks way too verbose to me. In the end, I think the best option might be to just write my GUI in C++ with Qt, and maybe integrate bits written in Rust into that.
=> https://github.com/antoyo/relm [a]