Update on 15 May 2023: Several media outlets, rights groups and Free Software projects have published articles based on this write-up to spread the word. The ones we know about are: Scroll.in, Internet Freedom Foundation, The Register, MediaNama, Mountain Valley Kashmir (print version), Tutanota and Indian Pirates. The Internet Freedom Foundation has also filed RTI requests to obtain more information.
A writ petition has been filed on the matter by Praveen and Kiran on behalf of FSCI with the assistance of lawyers from SFLC.in in the Kerala High Court.
As per media reports, 14 apps, including Free Software ones like Element and Briar, are banned in India as of 3rd May [1]. As per reports, the reasoning behind the ban seems to be: “These apps do not have any representatives in India and cannot be contacted for seeking information as mandated by the Indian laws”. This statement indicates to us that there are gaps in understanding of how federated services work (see the Notes section below for a detailed explanation).
There is a lack of clarity on the manner in which the ban will be implemented. We assume that the applications will be de-listed from the app stores.
Element, the company behind the Element app, has put out a statement [2] explaining its position on the ban. From it we learn that Indian authorities have contacted them in the past and that they responded constructively, which contradicts the stated reasoning for the ban. Element also had to learn about the ban from media reports, since there was no communication informing them of it.
While Element never compromises end-to-end encryption or user privacy, we have been contacted by Indian authorities in the past and addressed them in a constructive fashion (typically responding same-day).
As we understand it, Indian government officials claim to have approved the ban due to Element (and other apps) not having representatives in India.
That is a bit of guesswork on our part, because we did not receive any prior notice of the decision; clarification from the Ministry of Electronics and Information Technology would be most welcome.
There seems to be a lack of understanding on the part of the government of how these P2P and federated apps work. These applications have been crucial for communication during disasters and are used regularly as a communication medium in workplaces.
The ban, we believe, will not serve its purpose, as there are many anonymous alternative apps that terror outfits can use instead.
Federated, peer-to-peer, encrypted, Free Software apps like Element and Briar should be promoted. They are key to our national security, as they provide the means for sovereign, private and secure communication for the citizens of India. Element has been embraced by the governments of France [3], Germany [4] and Sweden [5], which should be an example for India.
Email is federated and has existed for a long time; the logic applied to banning Element would apply equally to K-9 Mail, a Free Software email client. Email service is provided by many providers, like Google, Microsoft and many others, who don’t have any representatives in India. Matrix, like email, is federated: it is the protocol behind the service. Element is just one Matrix client, and matrix.org is just one Matrix service provider. Banning all instances, clients and implementations of Matrix is similar to banning all email service providers, all email clients and the whole email infrastructure, which would be nearly impossible for the government, and a new service provider and/or client can appear rather quickly anyway.
While Matrix is federated, Briar is a P2P (peer-to-peer) app, which means it does not even have a service provider; users need to be online concurrently to exchange messages. It also does not require an internet connection and can be used over a Bluetooth or wireless connection. This makes it useful in emergency situations like natural disasters, when all other communication media go offline.
We had to share archive links for all element.io website links, since the site is already blocked by multiple ISPs in India.
[1]: https://indianexpress.com/article/india/mobile-apps-blocked-jammu-kashmir-terrorists-8585046/
05 May 2023
Ever since I saw elementary OS sporting the very legible Inter font for its UI and site, I wanted to make it part of my desktop too. The only problem was that any font I chose got a little blurry. Not any more!
Open up /etc/environment and add the following line at the end of it:
FREETYPE_PROPERTIES="cff:no-stem-darkening=0 autofitter:no-stem-darkening=0"
This enables stem darkening for all fonts, which makes them a little bolder on smaller screens. It makes a heck of a difference for Inter.
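For the terminal-inclined, the edit is a single append. A minimal sketch (shown here on a scratch file; on a real system the target is /etc/environment, edited as root):

```shell
# Append the FreeType tuning line. The real target is /etc/environment,
# which must be edited as root; a scratch file is used here for illustration.
line='FREETYPE_PROPERTIES="cff:no-stem-darkening=0 autofitter:no-stem-darkening=0"'
echo "$line" >> environment.demo
tail -n 1 environment.demo
```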
Reboot your machine and enjoy beautiful fonts.
Here's a before
And an after
Happy Hacking & have a great day!
I attended the FOSSASIA 2023 summit held at the Lifelong Learning Institute, Singapore: a 3-day-long conference filled with parallel talks. It’s my second time attending FOSSASIA; the first was the 2018 summit. Like last time, I didn’t attend many talks but focused on networking with people. A lot of familiar faces there: PV Anthony, Harish, etc.
I volunteered to run the Debian booth in the exhibition hall, distributing stickers and flyers. Rajudev also helped me at the booth. Most of the people there used Debian or its derivatives, or already knew Debian, which made it easier for me, as I didn’t have to do much explaining compared to other booths. Thanks to Parth for looking after the booth during my breaks.
Sometimes our booth also acted as a cloakroom :). Ours was close to the entrance door, and we may have been familiar faces to folks, so people would come and drop their bags before they went to talks.
One thing I love about such conferences is that people bring very different hardware that I would never get to see otherwise. I remember the KDE booth had a Valve Steam Deck portable gaming device running KDE Plasma. Then a person had these eyeglasses which act as a monitor. Then the usual DJI drones, but custom programmed. It was very lovely to meet people and play around with exotic hardware.
Kurian whom I met during a Debian packaging workshop in Kerala was a speaker at FOSSASIA. He presented a talk titled “OpenAI Whisper and it’s amazing power to do finetuning”. I was his unofficial PR guy, taking pictures :).
Clear skies, a little hot and humid. The weather was quite nice for me, except for the surprise rain and small thunderstorms. Compared to the temperature back home, it was wonderful.
RSS and Atom feeds are a new love for me, just like how a teenager newly discovers Instagram.
I discovered RSS and Atom feeds only some months ago, so the concept is very new to me. But, being immersed in Linux and related technologies, I quickly caught up with using them.
(Indians: it's not that RSS you're thinking about 🤪)
I may have seen various minimal sites before in my life, but I started appreciating the concept only recently. It mostly started with seeing bugswriter's YouTube channel and later his website; from there I started appreciating the concept of minimal utilities and programs. Minimal websites are mostly written in plain HTML with minimal CSS and JavaScript. They are lightweight, mostly adaptive, and work well with almost any browser, even terminal browsers like Lynx and Links.
Although I used to read articles on the internet a lot earlier, my workflow for them was not very organised. Some of them came up when I searched for something on search engines, and some of them Google News suggested to me.
Google News suggested a lot of normie articles to me, so I had to keep scrolling to find good ones. Finding articles through search engines was also limiting, as my reading material was restricted to whatever I happened to search for.
RSS and Atom feeds give me a single way to check for new articles on diverse websites, whether a site is minimal or bloated. Even YouTube and Reddit have hidden, working RSS feeds. While YouTube or Reddit may not show you everything you have subscribed to, feeds deliver every entry without any algorithm in between, so you can subscribe to content that way and never miss an update. This way I also don't need to use the bloated websites, or check for my favourite stuff on multiple different websites; I can see it all in one place.
Coming to the combination of minimal websites and feeds: feeds pair best with minimal websites, as they are themselves a minimal delivery mechanism for updates. However big or complex a site is, feeds are standardised, and most sites follow the same standards: either Atom or one of the RSS versions. The standards are quite minimal and satisfactorily complete at the same time, providing basic fields like the published date, modified date, a unique ID, author, language, a brief description, an external link, etc.
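For reference, a minimal Atom feed carrying those standard fields looks like this (all names and URLs below are made up for illustration):

```xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example blog</title>
  <id>https://example.org/</id>
  <updated>2023-05-01T10:00:00Z</updated>
  <author><name>Jane Doe</name></author>
  <entry>
    <title>Hello, feeds</title>
    <id>https://example.org/posts/hello</id>
    <link href="https://example.org/posts/hello"/>
    <updated>2023-05-01T10:00:00Z</updated>
    <summary>A brief description of the post.</summary>
  </entry>
</feed>
```

A feed reader needs nothing more than this one file: no account, no algorithm, no tracking in between.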
You need a specialised program, a feed reader, to interact with feeds; there are multiple options available.
Using feeds gives me the same vibes as how an Indian teenager (or any other teenager) discovers the vast world of social media, particularly Instagram. Though we may despise Instagram due to privacy issues and such, I still remember the first time I used it: it was a new kind of excitement, discovering something new on the internet. I get the same excitement from feeds and the minimal web. Whenever I come across an interesting profile (be it GitHub, LinkedIn, a YouTube channel or a Mastodon profile), I always search for a personal website. Many of them contain blogs, and I always look for a feed link and add it immediately to Miniflux. Whenever I check my feeds daily in the Newsflash or Miniflutt app, I get excited to read the newly available articles. Being the master of what I read, and controlling it fully, is a liberating experience.
So, all you Linux and tech people, please have a personal website (it need not be hosted on your own domain) and at least write something. It will improve your writing and communication skills, while also creating content for yourself, filling your website and showing that you have at least done something and know something in your field. In case you are looking for the feed link for my website, it is https://hemish.net/posts/atom.xml
I wrote this to give others an aggregated view of how to solve this problem, so they don't have to lurk in multiple forums or threads to find the solutions.
I have a Dell Latitude 5490, and since I bought it I have had this problem on both Windows and Linux distros: when I clicked Sleep/Suspend, it would try to suspend, but the power LED would stay on and the machine would freeze, not responding to keyboard or mouse input. The only way to wake it was a hard power-off: holding the power button for 5 seconds and booting again. This was not good, as it often led to minor data loss or sometimes disk errors.
There was a simple workaround: turn off automatic sleep and just power off the laptop. But that defeated the whole purpose of having a laptop.
Another workaround I found was that suspend worked fine if it was initiated with administrator privileges. On the Windows side, I enabled the Administrator user and set it as my default user. On Linux, I used sudo systemctl suspend to suspend with administrative powers. It worked most of the time, but occasionally hit the same problem.
I lurked across various forums, including the Ubuntu forums, the Arch forums and Reddit, and found a lot of solutions, which I am aggregating here.
To fix suspend not working on the Dell Latitude 5490 (and other laptops in the 54xx series), do all of the following:

1. Add your user to the power group:

sudo usermod -a -G power $USER

2. Update your firmware, either on the CLI using fwupd or via a frontend like GNOME Software.

3. Add these kernel parameters:

i915.enable_psr=0 mem_sleep_default=deep snd_hda_intel.dmic_detect=0 intel_idle.max_cstate=1 i915.enable_dc=0
I don't know which one of these does the trick, but applying all of them does no harm; suspend works gracefully and has not failed since I applied them all.
If you are a technical user who knows the internals, you already know how to add kernel parameters, but here is a guide for those who don't:
To add these kernel parameters to your bootloader on distros like Ubuntu or Fedora, follow these steps. Open the terminal and type:
sudo nano /etc/default/grub
Find the line that starts with GRUB_CMDLINE_LINUX_DEFAULT
and add your kernel parameters to it.
Save and exit by pressing Ctrl+X, then Y, then Enter.
Update the GRUB entries by typing sudo update-grub (on distros that don't ship update-grub, regenerate the config with your distro's equivalent of grub-mkconfig)
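The same GRUB edit can also be scripted with sed. A minimal sketch, demonstrated on a scratch file (on a real system the target is /etc/default/grub, edited as root and followed by sudo update-grub):

```shell
# Append the suspend-related parameters inside the quotes of the
# GRUB_CMDLINE_LINUX_DEFAULT line. Demonstrated on a scratch file;
# substitute /etc/default/grub (as root) on a real system.
PARAMS='i915.enable_psr=0 mem_sleep_default=deep snd_hda_intel.dmic_detect=0 intel_idle.max_cstate=1 i915.enable_dc=0'
GRUB_FILE=grub.demo
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > "$GRUB_FILE"
sed -i "s|^GRUB_CMDLINE_LINUX_DEFAULT=\"\(.*\)\"|GRUB_CMDLINE_LINUX_DEFAULT=\"\1 $PARAMS\"|" "$GRUB_FILE"
cat "$GRUB_FILE"
```

Always back up the real file first; a malformed GRUB_CMDLINE_LINUX_DEFAULT line can leave the system unbootable.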
If you’re using a bootloader other than GRUB, you can add these kernel parameters by following similar steps specific to your bootloader. For example, I use EFISTUB (which is in fact not a bootloader: it is the Linux kernel loading itself into memory), so I just create new boot entries for my kernel.
I am sure that if you perform all these steps, suspend will work beautifully. I hope this helps.
Today marks the end of my trip at MakeMyTrip. MakeMyTrip, or MMT in short, was my first full-time job after college. As the cliché goes, it feels like yesterday when Roubal told me about the opening for a Quality Assurance engineer at his workplace and I reluctantly, on his insistence, submitted my CV and went through the eventual interviews. To my great surprise, I passed all the rounds and was an MMT employee, just eleven days out of my college final exams. To be frank, I still say Roubal was more confident of me passing the interviews (three in total) than I was :D. Whether it happened by chance or was destined to happen all along, I’m not sure. I’m happy that MMT happened; it has given me many good friends, memories and experiences to cherish.
Counting up to this day, after working here for 1 year, 10 months and 4 days, I typed “In office” for the last time. From now on, I won’t get the opportunity to skip the incoming metro to grab an extra five minutes of book-reading time on the station. No more working in the massive and beautiful DLF Epitome tower, right next to Phase II metro station. No office lunch and post-lunch walks at parking level 6, or parking level 5, or finally parking level 4 :P
Maybe the move was a long time coming, or the opportunity in hand hastened it, or the fear of layoffs in the tech sector, or maybe my thought of not regretting, decades from now, never having tried a profile switch made me do it. MMT was the first company I started with as a full-time employee right after college. It made me understand the corporate world and the way it functions. It made me realize that a corporation is one large college project (albeit with folks with more experience), working towards a common project submission under different categories.
Letting folks know that I had taken the next opportunity, and that we wouldn’t be seeing each other daily, was harder than even filing my notice. Telling them and seeing them sad was heart-wrenching, and this kept happening till today.
My story at MMT wouldn’t be complete without a mention of a few folks I look up to and/or loved hanging around with. This whole thing wouldn’t have happened had Roubal not referred (and encouraged) me for the job when I was a bit helpless about the direction my professional life would take. Being here helped me grow, find the stuff I like and would like to pursue further, and gave me financial stability along the way, so thanks, man! I cherish your friendship a lot; we met in the first year of college, six years back (man! We have known each other since 2017, that’s long :D). I started in the Desktop/PWA team at MMT, and Vidit helped me stand on my feet, patiently guiding me through how stuff works, within and outside the company. We eventually became roommates when we moved to Gurugram, and then I learned a whole different aspect from you: how to live independently. It was my first time living away from home, and you being there eased the transition a lot. Next was Muskan, who I usually text starting with “arey Muskan”. Muskan, you’re a strong woman, aggressive and composed when required, and fun to talk to. I never had any doubt about your ability to handle situations where I usually fumble and don’t know what to do. Our talks (in office and around) are always fun, and you have more potential than you realize. Also, you were the first person to know about my decision to move forward with this opportunity. (Man, now that I’m writing, I feel I have so much to say to folks.)
Now let’s talk about Digi sir; it was Digivijay sir who taught me to address people as “sir” :D. It feels good (not sure why) sometimes using “sir” salutations with friends. I had hell lots of discussions with you, which I’m sure I couldn’t have had with anyone else. You are the very definition of fun, sir. I learned how to enjoy life with you (and how to party hard). We need to meet soon. Then came Aditi, a junior from my own college. She being two years younger always made me feel like an elder brother to her. How can we forget all the dance moments we shared? Going to miss those a lot. Only a few folks can match my dance stamina :D. No doubt you’re mature, and I have high hopes for you.
Next came my folks in Kitaabe. I had never been part of such a passionate group around books, and the impromptu decision to form Kitaabe was for the best. Folks here became my go-to people for arcades, movies, parties, trips and, foremost, books: loads of book exchanges and discussions. Special shoutout to A Man Called Ove by Fredrik Backman, which started it all and which almost all of us devoured one after another by passing the copy around.
Telling Aviral and Mohita that I’m leaving was probably the hardest of the lot. You both know why. We started connecting more towards the end of my time here, which was sad, because we could have had way more fun if we had connected and known each other better earlier. Aviral, I just love the conversations we share, and I can’t thank you enough for all the food for thought and for expanding my worldview; it’s been a while since I had these kinds of conversations with someone, questioning my own biases and thoughts on a deeper level, which I always feel the world lacks on many levels. I’m probably the biggest fan of your writing and wordplay (in a good sense). I could never match your word usage and the beautiful sentences that come out of those words, even now, when I have been doing this for almost three years on this blog itself. Looking forward to hearing your thoughts, in writing, on your blog. Mohita ji, you are quite senior to us, about 47 years I guess :P. Just kidding, 78 is the right age gap :D. But we had lots of fun teasing you about this. You were a guiding presence in the chaos we faced, which we couldn’t even comprehend; thanks a lot for that. I’m always amazed at your knowledge, your speaking truth to superiors, and your hunger to learn and grow.
In the later part of my stint here, I shifted to Kartikeya’s team when my notice period started, and boy oh boy, he’s one hell of an optimistic and humorous person, inserting a punchline everywhere, making even serious conversations sound like fun little banter. And Ankur, the person who never says no to gol gappa outings (though spicy gol gappas are always better; sweet, bad), someone matching my gol gappa appetite. I don’t know why our metro, public transport and civic infrastructure talks keep coming to mind while writing about you (another topic I can discuss with very few folks). Lastly, Shivam sir: we shared less time together due to your marriage and then my eventual team change, but your experience and strong opinions on how things should be around the workplace helped me orient myself.
A TED talk I listened to a while back, by a Harvard researcher, said the secret to a long life is human relationships. Being on notice period and recognizing the connection I shared with these folks genuinely made me realize how happy they made me feel. I cherish those relationships: no longer co-workers, but friends. All in all, I wish the best for all of you and only suggest that you experiment. I have learned how monotonous life can be, and how radically we can change it with experimentation. Be the change. That is my two cents for you all.
In conclusion: henceforth, whenever I pass the Epitome tower or someone mentions MMT, I’ll have the good feeling that I used to work there. Would I miss my colleagues? No! Because those who matter will still be in touch. I will miss the environment, the times together with those mates, and everything that came out of those moments.
PS: The sticker on my Mac isn’t from an anime; it’s the logo of the Debian Project, with which I have been associated for a while.
I was standing on an elevated metro platform, looking at high-rise office buildings. People were walking in the small park nearby, and a few were working in the offices, unaware of the one event that I had been actively looking forward to. These high-rise office spaces and the people returning from work again made me realize how small I am. The thought that I was becoming a Debian Developer (non-uploading), and that none of the folks I was looking at would ever know or care, was a humbling feeling.
Starting with a series of broken Pop!_OS and Ubuntu installations, to a month of trial and error getting my Debian installation right, to DebConf20 happening online, to India winning the bid to host DebConf, to meeting the Debian India community, a bunch of events finally led to me becoming a DD.
Back in 2020, I had dreamed of becoming a Debian Developer before DebConf23 Kochi. That gave me squarely three years to become a DD. After almost three not-so-successful attempts at packaging, I had almost forgotten about that dream and just went about helping with Debian conferences.
Initially, I wanted to grab sahil@debian.org because I’m fascinated with emails, and this was one heck of a cool email address to give around. I was also excited about getting an LWN.net subscription as an added benefit for DDs; no one covers Linux and Free Software news better than LWN.net. Though in later discussions I got to know that outgoing email delivery via @debian.org isn’t great, and the LWN.net subscription benefit has been trimmed down.
Looking back, I’m grateful to all the people who make Debian and the surrounding community happen. They have impacted many lives (including mine) in a greatly positive direction. If Debian weren’t there, I wouldn’t have had the opportunity to meet so many interesting folks, or the enthusiasm to attend all these events. Debian is one of the few tech communities where I have belonged :)
Going forward, I now have a slightly bigger part to play in Debian (and complete access to -private to keep me company ;)). I’ll continue volunteering for various community and technical activities as usual, as becoming a DD changes nothing. Tomorrow the sun will rise again; I’ll wake up, get ready, go to work, see people, and nothing will change. But there will be an inner joy that I have finally become a DD indeed :)
I would like to conclude with a few lines from Robert Frost’s Stopping by Woods on a Snowy Evening:
‘…The woods are lovely, dark and deep,
But I have promises to keep,
And miles to go before I sleep,
And miles to go before I sleep.'
PS - Now you can mail me at sahil@debian.org
PPS - Just looked at db.debian.org today, and it seems I’m one of the fourteen DDs in India. Cool!
My hunt for a plain-Markdown-capable notes app has ended. Why was I searching for a good notes app to fit my workflow even though there are countless offline and online notes apps available?
There are a lot of notes apps available out there, with syncing capabilities. But what are the problems with them?
Most notes services have vendor lock-in. If I use something like OneNote, Simplenote or Evernote, I am bound to their service through their apps and websites. There is no way I can use the software I like for my notes.
There are some good offline note applications available, but they can't sync. I am a great sucker for syncing capabilities: if I change something on my mobile, I want it reflected on my laptop and also backed up to the cloud. While online services come with syncing, they have vendor lock-in, as discussed in the previous point.
There are really no standards for notes apps. The apps and services each reinvent the wheel, which reinforces vendor lock-in and makes moving between services hard. It is a recent phenomenon that some notes services let you write notes in Markdown, but they still use their own methods to store that data in their databases: again, vendor lock-in. I found some apps which work great and just store notes as Markdown, but even they attach some metadata to it: some use +++ to open and close front-matter blocks, some use ---, some append tags in this metadata, and some just use folder categories.
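To illustrate, the two common front-matter conventions look like this (field names are illustrative; the ---/YAML style is used by Jekyll and Obsidian, the +++/TOML style by Hugo, among others):

```markdown
---
title: Shopping list
tags: [home]
---
# Shopping list
- milk
```

```markdown
+++
title = "Shopping list"
tags = ["home"]
+++
# Shopping list
- milk
```

The body below the second delimiter is plain Markdown either way; it is only this metadata header that varies between tools.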
While most notes apps on Android and iOS are pretty native, a lot of the good desktop apps are really just Electron apps. Though I am not opposed to Electron, I am not a fan of it. I am OK with running one or two Electron apps, like VSCodium, but everything in Electron? Hell no!
The easiest solution to vendor lock-in is to keep notes in a standard format, like Markdown.
The easiest solution for syncing is, again, plain Markdown, plus Syncthing to sync the notes between devices. You could even use a self-hosted cloud server or any other cloud solution; in my case, I use MEGA until I start earning and get my own self-hosted server.
It is also good to avoid specific or niche features. Just use plain Markdown: links, bullets, headings. That's pretty much it. For categorising the notes, just organise them into different directories.
The only good solutions I found are QOwnNotes and Paper. QOwnNotes is made in Qt, while Paper is made in GTK4/Libadwaita. Although these apps may keep some of their own data in your notes folder, you can just exclude it from syncing. For example, Paper generates a .trash folder; just don't use trash in Paper and you are good to go. QOwnNotes may store Markdown metadata or keep a SQLite database for metadata, but it still stores the notes themselves as plain Markdown in the filesystem tree. Just avoid specific features like tags, use directories to categorise your notes, and you are good to go. Currently I use Paper, because it is good to look at and adaptive (which may help if I start using a Linux mobile phone some day, and also helps when using a tiling window manager).
But what about Android? The most worthy apps I have found are Epsilon Notes and Obsidian. Both store notes as plain Markdown in the filesystem. Though Obsidian saves tags in Markdown metadata, you can get around that by not using tags and sticking to directory-based categories; if you do not use tags, it does not append any metadata. You can optionally include the date in the metadata, but if you do not, it works just as well.
<insert app>
Because I am very specific in choosing stuff for my use cases, I can't tolerate missing something I want. I am a sucker for native GUI apps, and for apps storing data in plain formats that are easily grep-able or manipulable with standard command-line tools. Syncing is an important aspect for me, as I keep switching between my phone and laptop and want instant access to my data.
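As a small illustration of why plain formats matter, here is a sketch (the directory and note names are made up) of searching notes with nothing but standard tools:

```shell
# Build a tiny example notes tree, then list every note mentioning "invoice".
mkdir -p notes.demo/work
printf '# Meeting\nDiscuss invoice totals\n' > notes.demo/work/meeting.md
printf '# Groceries\nmilk, eggs\n'           > notes.demo/groceries.md
grep -ril 'invoice' notes.demo   # -r recursive, -i case-insensitive, -l filenames only
```

Because the notes are just text files, the same tree works with grep, sed, git, Syncthing or any editor, with no export step.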
So, I spent a lot of time installing and uninstalling a lot of apps, and I have finally found the tools that work for me. Thanks for reading my rant.
The first MiniDebConf of 2023, MiniDebConf Tamil Nadu 2023 (MDC TN23 for short), happened on the 28th and 29th of January 2023 at the University College of Engineering Villupuram in Tamil Nadu. It was organized together with the local GNU/Linux user group, Villupuram GLUG.
Officially speaking, MDC TN23 was announced by VGLUG at the closing ceremony of MDC Palakkad 2022 in November 2022. I got to know from Praveen that the conversation around this started during VGLUG’s Software Freedom Day celebration in September, which Praveen and Sruthi attended. Post MDC Palakkad, Praveen invited all the interested folks to join in organizing MDC TN23 with the VGLUG team. The preparations started, and the VGLUG team, being organized and experienced in event management, did most of the work with inputs from Debian India folks. There were some gaps in communication and disagreements, such as on budgeting, but after many meetings we arrived at an agreeable enough budget (and got the requested budget approved by the DPL/Debian).
As usual, I asked Ravi if he planned to attend, to which he said yes. The journey started at New Delhi airport. The flight was to take off at 7:40 AM, and I had planned to reach the airport by 5 AM due to long security and check-in times. I seriously doubted myself getting up at 4 AM, getting into the taxi by 4:30 and reaching the airport by 5, but I did indeed make it. I met Ravi at the airport, and we flew down to Chennai. We then took the Chennai Metro and met Nilesh, with whom we took a train to Villupuram, where a cab was waiting to take us to our accommodation. There we met the rest of the Debian India team and the other speakers.
The opening ceremony took place in a marriage hall, and the guest of honor was the Member of Parliament (MP) from Villupuram, Dr D Ravikumar, a first for us, followed by the events at the University College of Engineering. This time, I had submitted only a DebConf23 introductory talk and took the time to enjoy the conference rather than worrying about my talk and such. I had planned to set up my mail server and, with the team, update poddery.com’s Synapse server, but didn’t get the time to do either. The number of interesting conversations, meeting friends and fellow community members, and interesting events kept me engaged. Though I did complete my MDC Palakkad blog post and set up an onion URL for Debian Fasttrack.
As usual, we had DebConf23 meetings on the sidelines. This time, the international DebConf team, including Nattie, Gwolf and Tzafrir, joined the conf too, as the team planned to do a DebConf23 venue visit in Kochi after the conf. I also got to meet them for the first time because of this.
During the visit, I tried a variety of Tamil foods and snacks, many of whose names I can’t even pronounce properly. I had hearty servings of filter coffee and even started to enjoy rasam. However, I didn’t like the taste of ghee masala dosa. I also visited Ratna Cafe and Sangeetha Cafe in Chennai, and my desire to try local food was fully satisfied during this visit.
My Debian Developer (non-uploading) AM process had already been completed by the 15th, so I had hoped the DD process would be done by the time of the conf, but that didn’t happen; the FD or DAM approval step was still pending. So the next conf will be my first conf after becoming a DD :D.
The event was also covered by New Indian Express, though they got many of the details wrong. Here’s the link to the news article.
Hoping that MiniDebConf Tamil Nadu becomes an annual thing, as we get to meet, learn, chill and revisit Tamil Nadu every year.
It all started with a discussion that we’d meet IRL if we had a physical Debian event. A few texts exchanged, and we had a venue, an organizing team, and were set for an event. This is how MiniDebConf Palakkad 2022, which happened on November 12 and 13 at NSS College of Engineering Palakkad, Kerala, came to be.
To give you folks a bit of context: I have been active in Debian and Debian India since DebConf20, which was around the time I started using Debian full time on my machine. Debian being such a welcoming community, and with DebConf22 Kochi (which has since changed to DebConf23 Kochi) preparations ongoing, I was pulled into participating in and volunteering for more Debian events. Due to the Covid-19 pandemic, DebConf22 Kosovo was probably the first physical Debian event in years. As the official host of the next DebConf, Debian India had to send a delegation to attend and receive Pollito. I wanted to attend this conference and meet the folks I had been working, interacting and gossiping with since I started in Debian, as we had never met IRL. But alas! Due to work, I had to skip the conf. And all the attendees kept saying I should have attended, it was so much fun. During one such conversation among the MDC Shoutout team (if you know, you know ;)), we were discussing this, and that’s when the idea of a Debian event came up. A few texts from Abbyck to his alma mater’s club, FOSSNSS, and soon we had them on board for the venue and the event. From there it all came to be.
Now, putting together a two-day, two-track Debian event is never easy. A call for proposals needs to be sent, the website has to be done, registrations have to be taken care of, money needs to be raised, publicity has to be done; and at the venue, the projection and sound systems have to be set up, attendees need food and accommodation, and all the nitty-gritty has to be planned, arranged and executed for the conference to happen. Discussions started around September, and the conference dates were fixed for mid-November. Teams, consisting mostly of FOSSNSS and Debian India folks, started work under guidance from Anupa. (She’s usually the one doing most of the stuff, never ever taking the credit for it.) All the folks did loads of work, multiple meetings were conducted and the on-ground stuff got done.
As Ravi too planned to join the conference, we booked tickets to Coimbatore, from where we planned to travel by road to Palakkad. After landing in Coimbatore, rain was the last thing I was expecting in the month of November. Later on, I came to know it’s rainy here in South India for half the year. As I hadn’t anticipated rain, I was wearing white shoes, and it felt really bad seeing them catch all the mud and dirt. Coming here was also a culture shock for me. Since childhood, I had never travelled to South India or to any place where Hindi wasn’t commonly understood or spoken. Now, here I was in Tamil Nadu with no understanding of Tamil. It wasn’t very hard per se, but being in a bit of a culture-shocked state elevated the hardship. Being with Ravi, who has travelled to many places, helped. He knew how to navigate unknown areas. Initially, the plan was to travel by cab via Uber, but that plan fell apart due to price differences. We finally travelled in a KSRTC (Kerala State Road Transport Corporation) bus, much to the delight of Ravi. The accommodation was arranged by the local team, and I got a hotel named KTDC Garden House near the scenic Malampuzha Dam. It was a scenic place, but with limited food options. Malampuzha Dam was beautiful, but due to time constraints I couldn’t fully explore the ropeway and activities near it.
This was my first in-person Debian event, so I was excited to attend it. On the day of the conf, I met in person many folks I had been talking to for a very long time. I also got to know many new folks. In total, I had proposed two Birds of a Feather sessions (BoFs), one short talk and a workshop, of which the workshop didn’t happen as we (Abbyck and I) didn’t get the time to sit together and prepare it. Coming to the BoFs, the DebConf India BoF happened on Day 1. It was intended to introduce folks to DebConf23 and get them to join the team (though we weren’t too successful on that front). Various teams and members were introduced, and work status was discussed. The second BoF was the Debian India BoF, to discuss the direction of the Debian community in India. Various issues and suggestions were discussed. Moderating this BoF was more difficult than I had anticipated, but that’s how things are. They don’t always turn out as we expect them to.
I also had a short talk on the topic “Mobile OpenStreetMap mapping on the go”. I had a few screenshots and outdoor pictures collected, which I planned to incorporate in my talk. I cobbled together all those pictures and finally completed my presentation just in the nick of time for the talk. The idea was to present the use cases and the different mobile applications for mapping them in OpenStreetMap. (I plan to do a write-up of the talk, but let’s see if it comes around.) The presentation went well, though no one asked any questions in the QnA.
As for my final submission, the workshop on self-hosting with Abbyck, we decided to cancel it due to time constraints.
As is the tradition with Debian events, GPG key signing was part of the event. Ravi gave an introduction to it, and Pirate Praveen dived into the details of how it’s done in his talk. It was a fun little activity where people came with their government IDs (some had their pictures from when they were teens) and their key signature written or printed somewhere, and we cross-verified them and then signed their keys. I got my keys signed by Praveen and Anupa, both DDs, to meet the criteria to start the DD process. After that, I started my non-uploading DD application. Nilesh, another DD, had mentioned that he might be my Application Manager (AM) as his AM slot was empty, and by chance, he was chosen to be my AM, a role he has taken quite seriously since then. The AM process is complete now. I’m awaiting the final step of Front Desk or DAM approval, post which I’ll officially become a Debian Developer, non-uploading.
I was meeting the whole DebConf23 Kochi organizing team for the first time (I was mostly the one coming from afar; most of the team lives in and around Kerala). The whole organizing team was housed in KTDC to allow in-person discussions, which were held on the night of the 12th. In-person discussions were way more productive, as we were able to discuss and deliberate on DC issues faster.
Finally, the event was also graced by the presence of the DebConf mascot Pollito, which is in India for DebConf23. Raju brought Pollito on the last day of the conf, and it was an instant celebrity. Raju was mobbed by the crowd for pictures with it, and it was given its own MDC ID as well. I, too, saw and held Pollito for the first time.
At the conf, everyone agreed that we needed more decentralized, in-person events in various parts of India; various nascent plans were made to conduct MDCs in Villupuram, Bengaluru, Hyderabad and Pune. For now, the Villupuram one is happening thanks to the active local community VGLUG, titled MiniDebConf Tamil Nadu 2023. Another MDC might take place before DebConf23 Kochi. I hope that, starting with these MDCs, at least the Palakkad and Villupuram ones become annual affairs. Fingers crossed on that. If you want to organize a MiniDebConf in your city, get in contact, and together we’ll figure something out.
Finally, I would like to thank all the folks who came together to make this conference a success, and who gave me the opportunity to meet everyone.
PS - Finally, shoutout to Abbyck!
While GNOME is moving to Libadwaita with the help of Purism (sure, they need to be acknowledged), theming has surely lessened, but people still use some hacks.
Some background information:
(Whenever I search for or install any new package on my system, I generally use yay -Ss <packagename>; or I type yay -Ss <packagename> and press TAB, and bash-completion provides me with the matching package names. By doing this, I get to know about any patched package that might give me some extra functionality or something else.
I generally don't theme my system and use the default theme for Libadwaita apps. For GTK3 apps, I use adw-gtk3 to match them with the Libadwaita apps. Though sometimes I do switch temporarily to the WhiteSur theme by vinceliuice (which also provides Libadwaita css), because why not, someone would like that glossy theme. But I keep coming back to the default theme.)
So, long ago I came across this package called libadwaita-without-adwaita on the AUR (Arch User Repository). As I mentioned, I don't care much about theming since I generally use the default theme, but this libadwaita-without-adwaita still grabbed my attention.
So, the maintainer of this AUR package (named ich) has provided a patch called themeing_patch.diff, which patches the specific code that pins the libadwaita-provided Adwaita theme. The patched package instead uses the theme from the GSettings (or dconf, whatever you call it) key gtk-theme-name, and also follows gtk-application-prefer-dark-theme, so it uses the theme set by the desktop environment. Someone has also packaged it for RPM-based distros (see this). I appreciate that the maintainer specifically mentions that this is experimental and that people should not report bugs to Libadwaita apps while using this package. (Also note that any theme used should support Libadwaita widgets in its main css for this to work correctly.)
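As an aside, the desktop-wide theme such a patched library would follow can be inspected or changed with gsettings. This is a generic illustration of the standard GNOME interface key (which backs gtk-theme-name); it is not specific to this patch, and it assumes the adw-gtk3 theme mentioned above is installed:

```shell
# read the theme name currently set for the desktop
gsettings get org.gnome.desktop.interface gtk-theme

# switch to the adw-gtk3 theme (assumption: the theme is installed)
gsettings set org.gnome.desktop.interface gtk-theme 'adw-gtk3'
```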
So, this made me think: if this can be patched so easily, without adding any new files or changing a hell lotta lines of code, people can experiment with it to provide their own libadwaita package which apps would use, but which would not pin the new Adwaita theme. This should not be done in mainstream distros, for the sake of not breaking userspace, but we can surely try something like providing a custom libadwaita package, where a distribution pins its own theme into the library, or just gives the user a way to change it in their settings. This patched package could be kept in the official repos, and voila, you have themed apps. I am not saying this should be done just to mitigate the theming issue, but it can be shown as a proof of concept, or just a temporary workaround till GNOME is ready with their theming API.
17 Dec 2022
This guide is a continuation from the lvm/luks installation guide.
After a vanilla slackware installation the only user present would be the root user. Using a normal user for our daily needs is ideal. Create one with
# useradd -m -g users -G wheel,floppy,audio,video,cdrom,plugdev,power,netdev,lp,scanner -s /bin/zsh jim
This will create a user named jim whose default shell is zsh and who is a member of the wheel group, which can run sudo commands. Now we set up jim's password
# passwd jim
Next, we set up sudo so that members of the wheel group can run any command
# visudo
In the file that opens up, make sure the following line is uncommented
## Uncomment to allow members of group wheel to execute any command
%wheel ALL=(ALL:ALL) ALL
Before we continue further, let's add a few lines to configure root's vim, if you plan on using vim to edit config files. Open ~/.vimrc and add
set nocompatible          " vim tries to emulate old vi, this tells it not to
filetype plugin indent on " Load plugins according to detected filetype.
syntax on                 " Enable syntax highlighting.
set laststatus=2          " Always show statusline.
set display=lastline      " Show as much as possible of the last line.
" dont litter the current folder with backup files
set backup
set backupdir=$HOME/.vim/files/backup/
set backupext=-vimbackup
set backupskip=
set directory=$HOME/.vim/files/swap//
set updatecount=100
set undofile
set undodir=$HOME/.vim/files/undo/
set viminfo='100,n$HOME/.vim/files/info/viminfo
slackpkg is slackware's default package manager. Everything that's in the slackware DVD is the entire official slackware software repo. So this is where the community steps in and contributes extra packages.
Before we get into that, we need to blacklist a few packages, that we ignored during installation.
Open /etc/slackpkg/blacklist in vim
kernel-generic.*
kernel-huge.*
kernel-modules.*
kernel-source
kde/
Uncomment the kernel lines. This will prevent automatic updates of the running kernel. The kde/ entry prevents installing kde packages. Now we tell slackpkg which mirror we wish to use.
Open /etc/slackpkg/mirrors
and uncomment a mirror closest to you. Take care to uncomment only one! The file is heavily commented, so go ahead and start stroking your unix beard :)
...
#----------------------------------------------------------------
# Slackware64-15.0
#----------------------------------------------------------------
...
# UNITED KINGDOM (UK)
http://slackware.uk/slackware/slackware64-15.0/
...
Slackware 15 is the current stable version of slackware as of this writing, and since it's the version that we've installed, choose a mirror for it and not for Slackware64-current. current is the development branch of slackware.
Lets update our package list
# slackpkg update gpg
# slackpkg update
This sets up slackpkg to use the new mirror and updates its local package database. Take care to run slackpkg update often when you learn of updates. Read more about slackpkg in its official docs.
To update your system from time-to-time, run
# slackpkg update
# slackpkg install-new
# slackpkg upgrade-all
If it finds any updates, go through the list and install what you need. Take care to read any post-install notes that are presented.
Feel free to read the slackware beginners guide.
Log out from root and login with your normal user.
Edit ~/.zprofile and add
EDITOR=vim
LANG=en_US.UTF-8
LC_CTYPE=en_US.UTF-8
LC_ALL=en_US.UTF-8
PAGER='less -R'
MANPAGER="$PAGER"
export EDITOR PAGER MANPAGER LANG LC_CTYPE LC_ALL
[ -d $HOME/bin ] && export PATH="${HOME}/bin:${PATH}"
# add su paths
export PATH="${PATH}:/usr/local/sbin:/usr/sbin:/sbin"
# long date format in ls
export TIME_STYLE=long-iso
These are some niceties to make our $SHELL life comfy. Logout and log back in.
Start your GUI with
$ startx
Xfce will start up. Open a terminal and do the same vim config update for your normal user as we did for root. On top of that, let's configure a tool called tmux. Open ~/.tmux.conf and add
# Index starts from 1
set-option -g base-index 1
set-option -g pane-base-index 1
# Renumber windows when a window is closed
set-option -g renumber-windows on
# no login shell
set -g default-command "${SHELL}"
# 256-color terminal
set -g default-terminal "tmux-256color"
# use 256 colors instead of 16
# Add truecolor support (tmux info | grep Tc)
set-option -ga terminal-overrides ",xterm-256color:Tc"
# Mouse
set-option -g mouse on
# Reload ~/.tmux.conf
bind-key R source-file ~/.tmux.conf \; display-message "Reloaded!"
Add an alias to launch tmux from your shell
$ echo "alias t='tmux -2 -u'" >> ~/.zshrc
Now, every time we need to launch tmux, we just type t and hit enter.
What I like to do when updating my system is: exit X and launch tmux from the tty, then switch to root using sudo su -, then run slackpkg update and go from there.
When using slackware, we'll be spending most of our time on the command-line, so it's a good investment to learn it. Bookmark the Unix Grymoire and get cracking.
Xfce is a very capable desktop environment. Almost everything one needs to know is mentioned on its archwiki entry.
slackpkg+ is a script that extends slackpkg to pull in software from 3rd party repos.
# upgradepkg --install-new slackpkg+-1.8.0-noarch-6mt.txz
Now we'll have a new file of interest: /etc/slackpkg/slackpkgplus.conf. Make sure the following lines are updated accordingly; some of them will be commented out. Take a moment to read the file. TLDR: it's where you set up the repos, and decide which repo takes priority if two repos carry the same packages.
# if you plan on using wine or steam
PKGS_PRIORITY=( multilib )
MIRRORPLUS['multilib']=https://slackware.nl/people/alien/multilib/15.0/
REPOPLUS=( slackpkgplus )
MIRRORPLUS['slackpkgplus']=https://slakfinder.org/slackpkg+15/
At the end of the file, a few repos are given as examples that could be used.
After we configure slackpkgplus,
# slackpkg update gpg
# slackpkg update
You can choose to skip installing multilib, but I use a few programs via wine, so I will install it. OldTechBloke has a tonne of incredible videos on slackware and how to set up a system. Do give them a watch if you wish to see what the outcome of this is going to be.
# slackpkg update
# slackpkg upgrade multilib
# slackpkg install multilib
sbopkg is a tool to download and install packages from SlackBuilds. SlackBuilds is to slackware what the AUR is to archlinux. Download sbopkg and install it with
# upgradepkg --install-new sbopkg-0.38.2-noarch-1_wsr.tgz
Make sure to blacklist sbopkg and all packages from slackbuilds in slackpkg's blacklist file.
kernel-generic.*
kernel-huge.*
kernel-modules.*
kernel-source
# This one will blacklist all SBo packages:
[0-9]+_SBo
# for alienbob's packages
#[0-9]+alien
# no kde
kde/
# sbopkg
sbopkg-0.38.2-noarch-1_wsr
sbopkg will download the source for a package from upstream, compile and build a slackware package, and then install it.
First thing to do would be to update sbopkg's local database
# sbopkg -r
Then open it up
# sbopkg
One gotcha here is that it does not handle dependencies. Every slackbuild script README has a variable called REQUIRES=, which lists the dependency packages that you can look into installing before you install the main package.
There's another fabulous tool (or set of tools) called sbotools, available on slackbuilds, that gives a bit more flexibility when installing from slackbuilds.
OTB has a wonderful video of this process.
TLDR;
# sbosnap fetch
# sbofind packagename
# sboinstall packagename
The slackware docs have more info on sbopkg. Do give it a read.
Happy Hacking & have a great day!
A couple of weeks back I installed postmarketOS on my idle phone, a Leeco Le 1s, which had been a paperweight for some time.
It all started with a road trip to Pondicherry (I will soon write about this trip). I was sitting in the front seat, where Praveen’s Librem 5 was kept charging on the car dashboard, and we had a small discussion about postmarketOS and how many new ports are available now.
My idle phone came to my mind. After reaching home, I started looking into porting pmOS to this device. Going through the pmOS website, to my surprise, there was already a port for it.
The OEM unlock was quite easy, despite a little hiccup at the beginning (I suspect it was solely my cable). Xiaomi users know the pain of unlocking a bootloader.
Following the pmOS community's preferred practice of using pmbootstrap, I built an image for my device and flashed it. The phone's boot stopped at the pmOS logo. I thought I had hit a bootloop. I tried sxmo, xfce4; everything was the same, nothing happened after the boot logo splash.
The pmOS troubleshooting wiki is quite good; they have documented most issues. Though the screen was stuck, I could still ssh into the phone. From the wiki, I came to know it's a screen refresh problem. I installed the msm-fb-refresher package and ran it as a daemon.
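As a sketch, on postmarketOS (which is Alpine-based and uses OpenRC) that step over ssh looks roughly like this; the OpenRC service name is an assumption based on the package name:

```shell
# install the framebuffer refresher package
sudo apk add msm-fb-refresher

# run it now and on every boot (service name assumed to match the package)
sudo rc-service msm-fb-refresher start
sudo rc-update add msm-fb-refresher default
```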
Voila, I had a MATE desktop.
The MATE desktop is not at all touch friendly, even though I tried scaling things up to read and tap on the screen. The wiki suggests xfce4 is a little more touch friendly, so I started moving to xfce. Then again, the same problem: stuck at the boot logo.
This time it was with lightdm; I turned off the CanGraphical check warning and now I have an xfce desktop. pmOS xfce seems great compared with MATE.
Nothing works as of now from a mobile phone point of view. The device maintainer says the battery works, but I couldn’t get it working; it's always in battery mode at a 50% status.
I thought the hardware buttons would never work, but with xev, I can see the hardware key events triggering.
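For illustration, over ssh one can point xev at the phone's running X session (assuming it is on display :0) and watch the button presses come through:

```shell
# print keyboard events from the phone's display; hardware buttons
# show up as KeyPress/KeyRelease events
DISPLAY=:0 xev -event keyboard
```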
I have a Librem 5 phone, and I use XMPP for communication. I was using Dino, but a few days ago I learnt that Dino restricts key sharing to contacts. That means, in encrypted groups, I was not able to decrypt messages sent by people not in my contacts. Other Dino users also reported similar problems. The solution was to add them to my contacts, but that is impractical in a large encrypted group. So I couldn’t use Dino unless this policy was removed. The other option was Gajim, which did not have support for small-screen devices such as my Librem 5, and the Gajim developers were not interested in a UI that works on both mobiles and desktops.
I decided to fix the Dino key publishing issue myself. Going through Dino’s repository, I found an earlier commit which allowed key exchange with everyone, and not just with contacts. I just had to change one line to mimic that commit, but the way Dino works had changed since then, and I could not build it successfully.
I met Abraham Raji at MiniDebConf 22 Palakkad, who looked at the code and figured out the way to fix it; it was still a single-line fix.
I didn’t want to use an older version of Dino for this feature, as the new version introduced another important feature: showing group history. I needed the GTK 4 version for this, which was not in PureOS, the operating system I was running on my Librem 5. To fix this, I switched to Mobian (which is Debian for mobiles). This finally gave me the desired setup, although I had to give up on disk encryption.
With the above example, I would like to emphasize the freedoms that free/swatantra software gives and their importance to everyone. Think of a proprietary app like WhatsApp. If I wanted to run it as I wish, it would not have been possible. Only WhatsApp developers could have fixed such an issue, and as you see in this case, the Dino developers weren’t interested in fixing mine. They just recommended adding every person in the group as a contact. Note that even if you don’t know how to fix the above problem, you can run my modified version of Dino. Even though I could not fix it myself, I was able to take help from someone who knew programming. This means we can collectively crowdfund and pay someone to fix a problem even if we are not able to fix things ourselves.
While chatting on the Krita IRC, deevad mentioned that he uses a script to automatically download the Krita nightly build and manage the launcher icons etc. So I thought it would be a cool idea to use a script to automate the update process. Usually I would click on the update button and then manually adjust the filename of the AppImage. I rename the appimage files to stable and nightly respectively, so that it becomes easy to point a shortcut at them, and there is no need to change the desktop shortcut file every time. I already have two different desktop files to launch the nightly and stable AppImage versions of Krita. The update done from Krita’s welcome screen downloads a file with a new file name, and we have to either adjust the name in the desktop file or rename the downloaded file.
While searching for a solution to automate this, I found out about appimageupdatetool. I believe this is the same tool that Krita uses in the back-end to fetch the update. It has a GUI version and also a CLI version; I downloaded the CLI version, which is named appimageupdatetool.
wget https://github.com/AppImage/AppImageUpdate/releases/download/continuous/appimageupdatetool-x86_64.AppImage -O ~/.local/bin/appimageupdatetool
The above command downloads the binary of the tool to my ~/.local/bin folder. Make sure you have the ~/.local/bin folder in your $PATH variable. I then mark it as executable with the following command.
chmod +x ~/.local/bin/appimageupdatetool
One of the best advantages of this tool is that it can self-update, and it can also update the appimage file in place, keeping the same name that you have given the appimage.
So updating an appimage is this command
appimageupdatetool -rO ~/store/Krita/nightly/krita-nightly
The -r option removes the old appimage file and the -O option overwrites the old file. It overwrites by default, but it also makes a backup file with a .old filename, so to remove that file we pass the -r option. I have renamed the appimage to “krita-nightly”, and my desktop file placed in .local/share/applications/ targets this file. Earlier, when a new appimage was downloaded it had a different name on each update, and I would need to rename the appimage or edit the desktop shortcut file with the new name.
Below is the content of my desktop file, called org.kde.krita-appimage-nightly.desktop
[Desktop Entry]
Categories=Qt;KDE;Graphics;2DGraphics;RasterGraphics;
Comment=Digital Painting
Exec=/home/raghu/store/Krita/nightly/krita-nightly
GenericName=Digital Painting
Icon=/mnt/attic/krita-build/krita/krita/pics/branding/Next/512-apps-krita.png
MimeType=image/openraster;application/x-krita;
Name=Krita-nightly
StartupNotify=true
StartupWMClass=krita
Terminal=false
X-KDE-NativeMimeType=application/x-krita
I have used a different Krita icon with a git symbol, from the Krita repository that I have cloned on my hard disk; I just supplied its path in the Icon option. The Exec option determines the target binary that is launched when you click this desktop file. I gave it the path where I store the nightly appimage. Now onto the automation and scripting part.
I used my limited knowledge of bash and my stack exchange search skills to craft this bash script, which will be run every day to update Krita using a systemd timer.
#!/usr/bin/env bash
# Krita nightly update
# A script to update appimage of Krita
# License: CC0
# we first check if the appimagetool itself is up-to-date.
# Rather than checking daily I check only after 25th of every month.
# I could also just check if there is any update with the tool itself but I felt lazy and did this.
cur_day=$(date +%d) # gets current day from system and assigns it to a variable.
# check for the update of appimageupdatetool itself on 25th of every month
if [ ${cur_day} -ge 25 ];
then
echo "Checking and updating of appimageupdatetool"
~/.local/bin/appimageupdatetool -rO --self-update # command to self update the tool
else
echo "skipping update of appimageupdatetool since it is not 25th yet"
fi
update1=$? # get the exit code assigned to a variable
# updating Krita
echo "Now checking and updating the Krita nightly"
~/.local/bin/appimageupdatetool -rO ~/store/Krita/nightly/krita-nightly # command to update Krita appimage
update2=$? # get the exit code assigned to a variable
# use highest exit code as global exit code
global_exit=$(( update1 > update2 ? update1 : update2 ))
# I then send a notification popup on the desktop to let myself know of the result or error.
# This requires a utility called notify-send which is present in most linux repositories.
if [ ${global_exit} -eq 0 ];
then
notify-send -a 'Krita update' -h "string:desktop-entry:org.kde.krita" -u normal 'Krita update done' 'No errors & update complete' --icon=face-smile
fi
if [ ${global_exit} -gt 0 ];
then
notify-send -a 'Krita update' -h "string:desktop-entry:org.kde.krita" -u critical 'Krita update failed' 'check the system journal' --icon=face-sad
fi
exit ${global_exit}
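The global_exit line above relies on the ternary operator inside shell arithmetic to keep the larger of the two exit codes; as a tiny standalone illustration:

```shell
update1=0
update2=2
# pick the larger of the two exit codes
global_exit=$(( update1 > update2 ? update1 : update2 ))
echo "$global_exit"   # prints 2
```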
This is a crude script and might be improved or enhanced. If you have a better method or way of writing it, or any extra additions, please suggest them in the comments. I might change it later when I have lots of procrastination time on hand.
Now comes the part where this script is run daily, automated. Earlier I used to use cron, but systemd timers are efficient and handy, plus I do not need to install cron as systemd already comes with the distro.
I have enabled something called lingering, which helps in running user services without a session. It won't strictly be needed here, I think, but the command to enable it is loginctl enable-linger username, followed by logging out and in again.
A user can run a systemd service, and the service file need not be in the system directories. Two files are required: a service file, which determines what to do, and a timer file, which determines when to do it. Both files are stored in the ~/.config/systemd/user/ folder.
I have a krita-nightly.service file in the above folder which tells systemd to run the bash script. The content of the file is:
[Unit]
Description=Krita nightly update
Wants=krita-nightly.timer
[Service]
Type=simple
ExecStart=/home/raghu/.local/bin/krita-nightly.sh
The ExecStart= line gives the path of the script to execute.
Now the timer file has the following content and is stored in the same folder mentioned above.
[Unit]
Description=Krita nightly update timer
# runs after my desktop session is started; use whatever is relevant to your desktop environment
After=plasma-plasmashell.service
Requires=krita-nightly.service
[Timer]
# runs the script daily at 00:00 midnight
OnCalendar=daily
# add a random delay so that this doesn't clash with anything else
RandomizedDelaySec=10min
# if my PC is not on at that time, this makes sure the script runs when I boot next
Persistent=true
[Install]
# runs only when I have a GUI session
WantedBy=graphical-session.target
After I place the two files in the folder, I run the following command to enable and start the timer.
systemctl --user enable --now krita-nightly.timer
This enables the timer, which in turn will run the updating script only for my user account. If the update goes through, I get a nice notification on the desktop; if not, the notification says to check the logs for the error. Hope this helps someone who wants to automate the process.
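To check that the timer is actually scheduled, and to read the logs the failure notification points to, the standard systemd commands can be used:

```shell
# confirm the timer is active and see when it fires next
systemctl --user list-timers krita-nightly.timer

# read the script's output from the user journal
journalctl --user -u krita-nightly.service
```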
Note: I keep the nightly and stable builds in separate folders so that the resource files of each version don't clash. I also have two desktop launchers, one for nightly and one for stable.
If you find this post helpful and want to buy me a coffee for sharing it just check out my page on Kofi or donate through my PayPal page. Or you can also buy my sunset painting as a print from here.
Want to upgrade your PC or laptop's firmware but don't want to use Windows?
fwupd is a command line program which can fetch firmware updates from vendors (including proprietary vendors like Dell) and apply them.
For me, it worked seamlessly with my Dell Latitude 5490 and a HGST brand hard drive.
You just have to enable the fwupd.service through systemd, and then you can operate the application from the command line.
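A minimal sketch of that workflow, using fwupdmgr (fwupd's command line front end):

```shell
# start the daemon now and on every boot
sudo systemctl enable --now fwupd.service

# fetch the latest firmware metadata from the LVFS
fwupdmgr refresh

# list devices with pending updates, then apply them
fwupdmgr get-updates
fwupdmgr update
```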
There are some GUI front ends available for the same application, like gnome-firmware. Even Ubuntu is developing a Flutter-based frontend for it. But for me, the command line works best.
It's just wonderful to see that we have reached a point where not everything requires Windows, and we can daily-drive Linux on our laptops and computers. That's the power of the open source community.
24 October 2022
Slackware 15 got released this year. It has been a very long time since I last messed around with it. Let's install it on our primary machine :)
LUKS - Linux Unified Key Setup is a disk encryption spec, and LVM - Logical Volume Manager is a software abstraction layer on top of partitions/disks that makes them easier to manage. I'm going to install slackware on my laptop, and encrypting its disk makes sense as there's a good chance the machine could get stolen.
I'll be using an unencrypted /boot partition, as the default bootloader in slackware, LILO, does not support booting a LUKS container. There are plenty of tutorials on setting up grub2 for this purpose; SlackerNET UK has an amazing video guide on this.
Booting the live media will bring us to this familiar screen where we are asked which kernel we want to boot. Hit Enter!
We are then asked to choose our keyboard layout. We may choose, but we'll hit Enter!
Now we login as the user root. Type in root and hit Enter!
After which we are greeted by the shell
My machine's disk is /dev/sda. To find yours, try running
# fdisk -l
# or
# lsblk
Assuming my disk is 40 GB, our plan is to allocate the first 1 GB to the boot partition and then use the rest for the main linux partition.
# cfdisk /dev/sda
Create sda1 with size 1GB, of type Linux, and then mark it as Bootable. Create sda2 with the rest; its type is also Linux. Confirm just once again that our disks are correct with
# fdisk -l
In our case, /dev/sda2 is our main linux partition. Technically, we won't be using it as a partition, but rather like another disk/volume.
# cryptsetup -s 256 -y luksFormat /dev/sda2
Type YES, in all caps, and give it a password. Do not forget this password!
Now, on to partitioning our disk. Here, we're going to give 2GB to swap and the rest to the root partition.
# cryptsetup luksOpen /dev/sda2 sda2crypt
Input the password that we previously set. This will make our encrypted disk available at /dev/mapper/sda2crypt.
Now onto creating volumes.
# pvcreate /dev/mapper/sda2crypt
# vgcreate lc230vg /dev/mapper/sda2crypt
# lvcreate -L 2G -n swap lc230vg
# lvcreate -l 100%FREE -n root lc230vg
pvcreate will initialize a volume to be used by lvm. vgcreate is used to make volume groups. lvcreate is used to make a logical volume; -L is used with a particular size, while -l (lowercase L) allocates by extents, here a percentage of the free space. lc230 is the hostname that I'm planning to give to my machine. You don't have to name your volume groups with this convention, but it's something that I picked up from debian's installer. Now we let lvm know about our volumes
# vgscan --mknodes
# vgchange -ay
Set up our swap volume with
# mkswap /dev/lc230vg/swap
Run setup from the shell and we'd see this
Please take a moment to read how to navigate the program.
Choose ADDSWAP and then let the installer find our swap partition. In our case, /dev/lc230vg/swap. Ignore the screenshot stating t430vg.
It'll then ask us if we want it to scan the partition for bad blocks. Feel free to say YES.
After that's done, we'll see our swap space configured.
Select /dev/lc230vg/root as the partition we want to use for root (/). Choose ext4 as the filesystem.
Select /dev/sda1 as the next partition we want to use. Format it with ext4 and set its mount point as /boot.
Choose "done adding partitions, continue with setup" in the next step.
And now all our partitions are ready!
Please take a moment to read the screen.
I usually skip KDE. If you're a KDE fan, have it selected. Hit Enter and you'd be asked to choose a prompting mode. Choose either terse or full here and start the package sets installation.
Skip making a USB boot stick
Now we are asked to install LILO, the Linux loader. This is the bootloader that Slackware uses by default instead of GRUB.
Choose expert mode.
We'd see the LILO installation screen.
Choose Begin
You can leave the optional LILO append parameters screen empty.
For the framebuffer console config, I usually leave it at standard
Set the lilo target to MBR
Check if LILO found our disk, /dev/sda in our case.
Choose a LILO timeout.
Choose to show a boot screen logo. Highly recommended!
We'll be then brought back to the initial LILO installation screen.
Choose "Linux - Add a linux partition to the LILO config" and then choose /dev/lc230vg/root. Set a name like Linux for the entry.
After that LILO setup should be done.
Now, it'll ask us if we want GPM. This would allow us to use a mouse cursor from the tty.
The next step is configuring our network. Choose YES.
Set a hostname (lc230) and then a domain name (localdomain would do fine).
Next we are asked to configure VLAN. Choose NO. Then select NetworkManager to manage our network.
And voila, we have our network configured. Choose YES to have NetworkManager manage our network by default in the next screen.
I usually unselect sshd here, as this is a desktop and not a server.
Set it to your local timezone.
Now we are asked to select an editor. Go with the default nvi.
I choose fluxbox here. Feel free to choose whatever you like. If you have KDE installed, that should come up here.
Set one up in this screen
We'll be brought back to the first screen of setup. Choose Exit and then drop to a shell.
Now we're going to go back into our installation.
# chroot /mnt
We need to generate an initrd for the generic kernel. To do that, run
# /usr/share/mkinitrd/mkinitrd_command_generator.sh
This will return a command that we can run to generate the initrd for the generic kernel, specific to our machine. For me it was something like
# mkinitrd -c -k 5.15.19 -f ext4 -r /dev/lc230vg/root \
    -m jbd2:mbcache:crc32c_intel:crc32c_generic:ext4 \
    -C /dev/sda2 -L -u -o /boot/initrd.gz \
    -h /dev/lc230vg/swap
The -h /dev/lc230vg/swap flag notes the swap partition to enable hibernation.
The format is usually something like this
mkinitrd -c -k *insert kernel version* -m *insert modules for your root filesystem, often just its name e.g. ext4* -f *insert root file system type* -r /dev/cryptvg/root -C /dev/sdx2 -h /dev/cryptvg/swap -L
Edit lilo's config to make it use this new generic kernel
# vim /etc/lilo.conf
Edit the corresponding parts to look like this
image = /boot/vmlinuz-generic-5.15.19
  initrd = /boot/initrd.gz
  root = /dev/lc230vg/root
  label = Linux
  read-only # Partitions should be mounted read-only for checking
Above that, there's an "append" line. Edit it to look something like this,
append = " resume=/dev/lc230vg/swap"
Now we update lilo with
# lilo
Ctrl + D out of our chroot shell and then reboot
We'd be greeted by lilo
Hit Enter.
After our boot process starts we are asked to unlock our disk. Enter the password that we chose for encrypting our disk.
And then we are put at the login prompt. Enter root and the password we chose for root.
Type startx and behold!
In the next post we'll look into post installation steps for our brand new slackware install.
README_*.txt files.
Happy Slacking & have a great day!
Isn't it amazing to get your credits in an open-source app?
Dialect is an open-source desktop translation app for Linux systems, specifically designed for the GNOME desktop. I translated it into Hindi so that anyone using their desktop in Hindi can see Hindi translations, and the app doesn't look like an odd English app in a Hindi environment.
Open source is not only about code; it's about multi-faceted workloads spread across the community, to which everyone contributes in their own time, and this includes translation as well. We live in a globalised world where people across different nations, speaking different languages, use computers, so internationalization (i18n) and localisation (l10n) of apps is important.
I believe having access to computers is a basic right for everyone, and whatever I can do to make computers more accessible is worth it.
Thus, I am part of the Hindi translation team at l10n.gnome.org (and I have translated apps like gnome-clocks, gnome-weather, gnome-bluetooth and gnome-characters). The official website of our team is indlinux.org.
23 is two years older than 21, when you're legally allowed to marry in India, and one year older than 22, when I came to the realization that I'm old now, not age-wise but old in some mental sense to me. The age number says that I've become an adult, but inside, it doesn't feel like it. I still don't feel like an adult. When I'm at home, mum does my packing, dad still does governmental and other stuff. Frankly, I don't feel like growing up now.
As usual, life continued after my 22nd birthday. Professionally, I'm still with the same organization and have naturally become more comfortable and confident in my job. It has given me the freedom to not think twice about buying or eating out, as I have become independent, not accountable to anybody for my spending.
Once again, the months and the year seem to have intermingled. The year seems to have flown by. Jotting down the months and trying to remember the happenings in them, I couldn't come up with anything interesting from the year. Either my memory is not that great or things were really mild these past months.
On my blog, writings have grown to focus more on my life and experiences now. Earlier, I was exploring a lot of technologies and stuff, but recently I find my writings to lean towards my feelings and thoughts. These posts usually start on Notally, my note-taking app on the phone, copied over to Etherpad with random bits and pieces of first thoughts. I usually jot down pointers I want to think/write about and then elaborate on them.
This year also marked less time dedicated to exploring and hosting new applications in server space, though I have started hosting my own Mastodon server at masto.sahilister.in, which has me and Ravi as its users. I also took a more active role in maintaining diasp.in with Raju Dev by co-sponsoring it, though I don't have any concrete will to keep it running due to low to no participation from the community. Next, I'm working on pulling up a Free Software mirror hosted at mirrors.sahilister.in. At the moment, it houses official mirrors for Termux, NomadBSD and OSMC, seeing more than 100k requests/day with average network usage of ~45 GB/day. Work is in progress to become a Blender and Trisquel mirror, though that has already taken a long time, and I'm not seeing them completed anytime soon due to a long back and forth with those projects.
Coming to our operating system of choice, Debian: I missed the opportunity to attend DebConf22 Kosovo. It was a wonderful opportunity to interact with the larger Debian community, with whom I have been working and having fun for the past two years. An in-person MiniDebConf in Palakkad, Kerala is planned for November. That should give ample time to visit Kerala and interact with the folks. On the sidelines, preparations for DebConf23 Kochi are also in full swing.
A highlight of this year was contributing to OpenStreetMap (OSM). Finding random usage of OSM data, ranging from an online bus tracking website to the live booking screen in my office, has intrigued me. Even big tech players like Snap Maps and Instagram use it in their applications. Now my phone contains five different OSM apps for various navigation and contributing purposes. Going down a new road always leads to a bunch of local surveys and POI additions, note-taking and GPS track recording for editing the map later on the laptop. It has helped me be geographically aware of places near me, like knowing about the weird naming of villages near my native place, which have different names on paper and on signboards. It has also made my conversations with my family more interesting, as I now come up with questions about nearby places, and they have to brainstorm and discuss together to give me a suitable answer. This also leads to many visits to unknown roads, which is always exciting.
Lately, life has given me a few setbacks (and they seem to keep coming these days), which I'm trying to see as challenges and life lessons to be learned. This year, I want to explore the mountains, go on treks, be in nature and feel the silence. I want to be on my own and in my own company more, and want to enjoy it too. I also want to let go of the desire for external validation of my actions, as no one has seen the world through my eyes and no one desires to accomplish the same things as I do.
Lastly, I feel blessed and grateful for having my parents, sister and other people in my life who have always supported me in navigating life. They have eased many of my transitions, and because of them I didn't have to face many troubles.
PS - Read last year’s birthday post here.
PPS - Team change notified at work, so back to hustle.
Disclaimer: This post is not in any way sponsored by Huion or anyone related to them. All the things stated here are my opinions and your mileage may vary. I do not guarantee it will work for you the same way it worked flawlessly for me.
Recently I have been visiting my village house a lot, and I have been trying to set up an alternate workstation there so that I don't need to carry the laptop and tablet with me all the time. I planned to buy a Wacom, but as a backup tablet it is costly: around ₹28,000 ($345) for a medium-sized Wacom Intuos. So I was searching for an alternative brand. I knew about Huion and XP-Pen, but my biggest requirement is that they should work on Linux without making me climb Everest.
I was happy to see a Huion user report on krita-artists.org that one model (H950p) works nicely with Linux. I went to the Huion website and found that there is a slightly newer model of this tablet with a USB-C option: the H610X. This model has all the niceties of the H950p, plus it also works with Android phones and tablets, a good travel companion. What's more incredible is that this tablet costs one-seventh of the price of a Wacom, around ₹4,000 ($50) here.
I was slightly sceptical about buying the new model, since Linux support tends to be dicey for new models. Nevertheless, I saw that the product page on the Huion website listed Linux as a supported platform. They also have an official Linux driver package for this tablet. So I purchased the H610X.
The package was simple and to the point. No wastage of extra wrapping and plastic etc. The box had
The tablet felt very lightweight compared to my Wacom Intuos Pro medium. The design is simple and identical to the Wacom; even the button and logo placements are in the same place. The material is a bit cheaper than Wacom's. The pen too felt lighter, and I think this is an advantage or disadvantage depending on the person. I felt it was good and intuitive to write and draw with. The pen doesn't have an eraser end, which I never used anyway. Tilt support is good and works out of the box.
The tablet was working right out of the box; most tablets these days are plug and play. It is the configuration that falls short on Linux if the tablet is new or not supported. We will look into this aspect later. The surface of the tablet is not smooth like the Wacom, something which I am a bit worried about. It has a slight roughness to it, and this may or may not result in nib wear, like how the new Wacom Intuos models' nibs wear out due to the rough texture.
The nib has a very slight, barely noticeable spring mechanism which makes it feel smoother and less fatigue-inducing. Some heavy-pressing users might think that they are not exerting enough pressure. I don't know if it actually has a spring inside or not.
The pen holder too is very lightweight, and it can be easily toppled like every other pen holder unless the pen is placed horizontally. There is a slightly larger chance of this pen holder breaking from damage than the Wacom one. The Wacom pen holder is heavier, with a metal base, and firmly stays on the desk.
The USB port on the tablet is awkwardly placed at the top of the left-hand side; this makes the cable easier to bend and wear out if you keep your keyboard above the tablet like me. The cable is also not as stiff as the one Wacom offers. The cable has a Velcro strap so that you can fold the extra cable and tie it with other cables on your desk, a nice touch from Huion.
The buttons feel better than Wacom's and are not too stiff. They however lack the pronounced tactile bumps or marks for accessibility and blind pressing. There is only one very small circular bump on the middle key. The tactile bumps on the Wacom pad help in identifying the key you are pressing without looking at it. However, here the number of keys is smaller, and there is a slightly raised bar between the keys so that you can guide your finger to the middle or last key, so it might not be such a big problem.
Pen pressure works like a charm out of the box; no need to install Wacom or any other drivers. Just plug and play. Tilt too works out of the box. The driver is provided by the Linux kernel, and the device is managed by libinput. On Android too the tablet works out of the box, though I have yet to draw and check pen pressure and other stuff on it.
However, if you want to use the keys and configure the tablet correctly, you need to do some fiddling. As this is a new model, there is no tablet definition file in the libwacom package for it. There is a tablet definition file for the H950p, which strangely has the same USB product ID (006d) as this tablet. So in the KDE settings section, this tablet is wrongly reported as an H950P. This also leads to a mismatch of tablet areas. The reported tablet surface area shows a square of 32767 x 32767, so when we try to map the area proportionally to the monitor resolution, it renders half of the tablet out of range.
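Roughly how much goes out of range can be sketched with a little arithmetic (an illustration, assuming the mapper keeps the monitor's 16:9 proportion inside the misreported 32767-unit square):

```shell
# A 16:9 monitor mapped proportionally into a 32767 x 32767 square
# only needs 32767 * 9/16 of the vertical range; the rest is unreachable.
used=$((32767 * 9 / 16))
unused=$((32767 - used))
echo "usable height: ${used}, unreachable: ${unused}"
```

That leaves well over 40% of the reported vertical range unreachable, in the same ballpark as the "half of the tablet out of range" behaviour described above.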
The KDE settings version on Fedora 36 that I am using has a bug where it doesn't remember any settings changes. So even if the buttons are detected correctly here, since they are treated the same as the H950p tablet's, the settings don't persist.
This bug is solved in a future version of the settings, and hopefully in the next update the settings will have no issues.
For the tablet to get correctly recognized we would have to submit a tablet definition file to the libwacom project. Maybe another weekend project for me, of course when I do get a free weekend :).
The Wayland side should be plug and play and I see that KDE has been adding a new tablet setting GUI so that should be covered. However, when I wanted to test the development version of this, I couldn’t log in to the live USB of KDE neon with Wayland.
I tried the DIGImend drivers, but they did not give me any advantage over what I already have with the defaults. They too didn't recognize the tablet, and wrongly reported the pad as the stylus and the pen as the eraser.
The main concern is the aspect ratio of the tablet surface, which is not 16:9 like my monitor. So drawing a perfect circle gives me an egg shape. Quick tracing of a round jar lid or coin gives this result; you can see the coin is a squished circle here.
I tried the official driver given on the Huion website. The driver program was an archive and had two bash scripts to install and uninstall the utility. Although the driver's front end was made in Qt and was under the LGPL license, there was one binary called huioncore, which I suspect is a proprietary program. Nevertheless, I gave it a try to check how the official support is. The utility is only officially supported on Ubuntu, but I got it to install on Fedora without any issue. The utility autostarts with every boot and sits in your system tray with a blue pencil icon. The interface is well-designed and easy to use.
However, there are some drawbacks to this. The utility needs to run in the background all the time for the tablet configuration to work; if you close it completely, all the config gets unloaded. Some of the configurations were not persisting. Huion needs to test this more and should try to release their driver as Free and open source software so that people can improve it and help them too. Their hardware is on par with user expectations, but the software side is not that good on Linux. Although, since Huion is one of the few hardware companies openly advertising Linux support on its website, we can give them a bit of time to understand and catch up to the community. You can work with this utility if you want. I won't be using it, since it is proprietary and doesn't give me anything that the free software counterpart can't, and I am not forced to use it, unlike some graphics card vendor scenarios.
Since the tablet model isn't properly recognized but works perfectly, configuring it is fairly easy. First I tried to use the default libinput driver and did not use the Wacom driver. This mode gives you the tablet with working pen pressure and also a mapping facility via xinput. Just run the following command to trim the surface area of the tablet to match the aspect ratio of the monitor.
xinput set-prop "HUION Huion Tablet_H610X Pen (0)" "libinput Tablet Tool Area Ratio" 16, 9
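As a sanity check on the numbers: with a 16:9 ratio, the mapped height extent is just width x 9/16. Using the 50800-unit width that appears in the Area command in the script later in this post:

```shell
# Height extent implied by a 16:9 ratio for a 50800-unit-wide area.
width=50800
height=$((width * 9 / 16))
echo "${width} x ${height}"
```

This works out to 50800 x 28575, matching the Area values used further down.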
For now, if you do not want to map the buttons and just want to work with the tablet, this is good to go. To map the buttons I opted to fall back to the Wacom driver method, because I did not have the patience to find the commands and information regarding libinput, and it is easier with the Wacom driver. This tablet works with the Wacom drivers too.
So install the xf86-input-wacom package from the repository and configure the tablet according to your needs. Here are the steps that I followed.
Step 1 – Install the wacom driver from your Linux distribution’s repository. For me the command on fedora was this
sudo dnf install xorg-x11-drv-wacom
Step 2 – Make an Xorg configuration file which tells our system to use the Wacom driver for this tablet. You will need administrator privileges for this. Open the file in a text editor with the following command.
sudo nano /etc/X11/xorg.conf.d/50-huion.conf
And paste in the following contents
# huion tablet and buttons
Section "InputClass"
    Identifier "Huion tablet class"
    MatchProduct "HUION"
    MatchIsTablet "on"
    MatchDevicePath "/dev/input/event*"
    Driver "wacom"
EndSection
This tells our system to find any hardware with the product name "HUION", check if it is a tablet type, and use the specified driver. Now if you reboot, the Linux command line utility for configuring tablets, called xsetwacom, should show your tablet in its list of devices.
Now we can use the xsetwacom command to configure the tablet area and buttons too. The button numbering on this tablet is similar to the Wacom Intuos, probably because we are using the Wacom driver. Here is the button layout with their numbers.
You can use the following command to map a key or sequence of keys to a button
xsetwacom set "HUION Huion Tablet_H610X Pad pad" Button 11 "key ctrl s"
This will map button number 11 to Ctrl+S, which will trigger save.
Step 3 – Instead of running each command individually, I have made a script with the full sequence of commands, to be run when required. The script is below.
#!/bin/bash
#get device id without hardcoding it
list=$(xsetwacom list devices)
pad=$(echo "${list}" | awk '/Pad pad/{print $(NF-2)}')
stylus=$(echo "${list}" | awk '/stylus/{print $(NF-2)}')
if [ -z "${pad}" ]; then
exit 0
fi
# configure the buttons on ${stylus} with your xsetwacom commands...
xsetwacom set "${stylus}" Area 0 0 50800 28575
xsetwacom set "${stylus}" RawSample 1
xsetwacom set "${stylus}" Suppress 0
xsetwacom set "${pad}" Button 1 "key m"
xsetwacom set "${pad}" Button 2 "key a"
xsetwacom set "${pad}" Button 3 "key ctrl"
xsetwacom set "${pad}" Button 8 "key b"
xsetwacom set "${pad}" Button 9 "key +ctrl +shift z -ctrl -shift"
xsetwacom set "${pad}" Button 10 "key +ctrl z -ctrl"
xsetwacom set "${pad}" Button 11 "key ctrl s"
xsetwacom set "${pad}" Button 12 "key +v"
#optional send notification after configuring
notify-send -a 'config' -h "string:desktop-entry:org.kde.konsole" -u normal 'huion config done' --icon=face-smile
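As an aside, the awk lines at the top of the script extract the numeric device id as the third-from-last field of each line of `xsetwacom list devices` output. A small standalone illustration with sample output (the device lines here are made up for the demo; the ids on your system will differ):

```shell
# Sample lines in the shape xsetwacom prints; the id is field NF-2.
list='HUION Huion Tablet_H610X Pad pad        id: 10  type: PAD
HUION Huion Tablet_H610X Pen stylus     id: 11  type: STYLUS'
pad=$(echo "${list}" | awk '/Pad pad/{print $(NF-2)}')
stylus=$(echo "${list}" | awk '/stylus/{print $(NF-2)}')
echo "pad=${pad} stylus=${stylus}"   # prints pad=10 stylus=11
```

This avoids hardcoding device ids, which can change between boots or when other input devices are plugged in.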
This script can be run on startup or run with systemd triggered by udev, as mentioned in this Arch Wiki article. But I found that while the script runs on startup and the configuration is done correctly, removing and re-plugging the tablet will not run the script again. I searched for a solution and found this wonderful article by Brian Lester. They use Python to monitor changes in udev and run the script when necessary. You can find the Python script they use in the linked article. I changed the path in it accordingly and used it. Here is my Python script, copied from Brian's script and shared with permission.
This requires pyudev, so please install it from your distribution's repository or from pip. Fedora has it by default, so I didn't have to install anything.
#!/usr/bin/python
# script from https://blester125.com/blog/wacom.html
# author - Brian Lester
import time
import argparse
import subprocess

import pyudev


def main():
    parser = argparse.ArgumentParser(description="Listen to udev for the huion tablet.")
    parser.add_argument('--vendor_id', '--vendor-id', default=b"256c", type=lambda x: x.encode("utf-8"))
    parser.add_argument('--product_id', '--product-id', default=b"006d", type=lambda x: x.encode("utf-8"))
    args = parser.parse_args()

    print("Initializing udev listener")
    context = pyudev.Context()
    print("initializing udev monitor")
    monitor = pyudev.Monitor.from_netlink(context)
    monitor.filter_by(subsystem="usb")
    print("starting udev monitor")
    monitor.start()

    print("Running huion setup")
    subprocess.call("/home/raghu/.local/bin/huion.sh")
    print("Setup huion")

    for device in iter(monitor.poll, None):
        print(f"action on device {device}")
        vendor_id = device.attributes.get('idVendor')
        print(f"device vendor id: {vendor_id}")
        product_id = device.attributes.get('idProduct')
        print(f"device product id: {product_id}")
        if vendor_id == args.vendor_id and product_id == args.product_id:
            print("Device is my huion")
            time.sleep(2)
            print("Running huion config setup")
            subprocess.call("/home/raghu/.local/bin/huion.sh")
            print("Setup huion")


if __name__ == "__main__":
    main()
And then I have a systemd service file, like Brian, which starts the Python program to watch udev. I place this service file in ~/.config/systemd/user as huion.service and enable it as a user service.
[Unit]
Description=Configure Huion Service
After=graphical-session.target
PartOf=graphical-session.target
[Service]
ExecStart=/home/raghu/.local/bin/huion/huion.py
Restart=on-failure
RestartSec=10
StartLimitBurst=10
Type=simple
Environment=PYTHONUNBUFFERED=1
[Install]
WantedBy=graphical-session.target
systemctl --user enable huion.service
And there we are with a configured tablet. And now our coin trace test works as expected.
Huion H610X is worth every penny and a good tablet which is usable on Linux without much hassle. The tablet works excellently and it is a boon for budding artists who are low on cash. In the coming months, I am optimistic that the initial hiccup of the tablet not being recognized in the KDE setting will be sorted out. I give this tablet a 7 out of 10. If you have any questions or want me to test anything with this tablet let me know in the comments below.
(Sorry if you have read this already, due to a tag mistake, my draft copy got published)
I recently bought a refurbished ThinkPad X260. If you have read my post about my previous laptop, you'll know it's a big jump from 1st gen Intel processors to 6th gen.
I really love my old laptop. In fact, I am actually writing this from it. It's OK for my day-to-day tasks, but it's not convenient to carry around when I am travelling.
I was thinking a lot about buying a new machine, but always ended up not doing so because of how minimal the hardware ports are on newer laptops and how hard they are for normal users to open for usual maintenance. But during DebConf22 someone made fun of my laptop; he mentioned it's more of a printer than a laptop. And there I decided to get a new machine.
ThinkPads have always fascinated me. They look sturdy, have more I/O ports and a nice keyboard, and are easy to open up for usual maintenance. But I never settled on which model I should pick. Lenovo has only a limited number of models available in India, and with the right specs, it's way out of budget for me. I had planned on getting a liberated.computer machine, but should I go for an old model again? So I decided to get something a little more recent. I knew Marhaba Computers, from whom I had already bought a ThinkPad for my brother.
Deciding to pick the X260 took around an evening. I bugged a lot of my friends, especially Kiran. I might have made him mad with how much I went back and forth asking him questions about this. Finally we arrived at the X260.
I chose the minimal storage and memory, as I will be changing them anyway. So mine had a 320 GB SATA HDD, 4 GB of DDR4 memory and an Intel i5 6000-series, 6th gen processor. I had a 120 GB SSD lying around, so I quickly swapped it with the stock HDD and installed Debian. More memory will be added soon.
The resolution is 1920x1080 and scaling seems very small; it started to give my eyes a hard time, so I decided to adjust it.
abhijith@Adebian$ cat .Xresources
Xft.dpi: 120
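Note that Xft.dpi takes an absolute dots-per-inch value, not a percentage; against X's default of 96 DPI, a value of 120 works out to 1.25x scaling:

```shell
# Scale factor implied by Xft.dpi: 120 relative to the 96 DPI default.
echo "scale: $((120 * 100 / 96))%"   # prints scale: 125%
```

After editing ~/.Xresources, the new value is picked up on the next X session, or immediately with `xrdb -merge ~/.Xresources`.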
I was using the libinput driver for the trackpad. It turned out to be horrible for my model, so I decided to move back to synaptics.
So far I haven’t decided on full transition. So its my travel laptop for mail, IRC, RSS, matrix reader/client.
These are my experiences specific to Delhi Metro and Rapid Metro Gurugram, which I frequent almost daily. Metros in other cities may or may not share these traits.
The Metro seems to carry almost everyone. I haven't seen so many people in one place together, and such diversity among them. After reading Metronama: Scenes From The Delhi Metro by Rashmi Sadana, I came to the realization that travelling by Metro isn't economical for the economically weaker sections of society. Barring these strata, almost everyone is represented in these Metro coaches. The Metro really blurs the lines of societal differences; I can see a daily wageworker sitting next to an IT professional. The Metro brings them together like no other public space could.
The Metro is truly made for city use and quick movement: large windows for light to seep in and eight doors in each coach to quickly move in and out. Metro coaches in rush hours are no fun. Shoulder-to-shoulder pushing and difficulty getting in and out are the norm during office hours. In lean hours, one can comfortably sit back and observe (no fun observing others when your legs are aching due to standing and constant motion). Ladies, depressed working professionals, hopping kids, and friends enjoying their going-places chit-chat, amongst others, are common fixtures in these coaches. Loud gossip and phone calls can be heard, though most folks are down in their phones, heads bowed and earphones plugged in, enjoying a downloaded OTT series.
Even though the public announcements and notices in the Delhi Metro say luggage heavier than 25 kg or exceeding 80x50x30 cm dimensions is not allowed, this is rarely enforced, which helps when we move from the railway station to the Metro station and vice-versa. The Metro serves as our last mile to reach a second home in the city.
Metro stations are architectural marvels. They're well-planned, functional public spaces made to onboard people from a chaotic city like Delhi. Each above-ground station is built differently according to its situation, as the Metro traverses different places and different landscapes. The Metro zooms over bridges, sometimes near multi-storey houses where a curious passenger can peer into a person's household. Metro windows are screens with changing visuals: forests if you come to the Gurugram side on the Yellow Line, parks and city landscape on the Violet Line towards Faridabad, and others. Underground stations are sad, with no visuals other than narrow black walls.

The Metro had a forward-looking approach, and the leadership seems to have emphasized it by adopting accessibility everywhere. Accessibility seems like an afterthought in other modes of public transport like bus and rail. Lifts and ramps are first-class citizens in stations, with Metro staff placed around the station to help people. With dedicated gates for entry and exit, the flow of people is highly channeled through Metro stations. Station exit barriers are fun too. I usually look at the amount deducted from the card of the person in front and guess how far they have travelled to reach this station.

Metro stations stand in contrast, in terms of cleanliness, to other public spaces in India. They are neat and clean, and feel more like an airport than their rail cousin. The cleanliness culture of the Metro really seems to have seeped into its passengers as well, who for the most part don't litter on or in most stations. Another fascinating aspect is the Metro lifts, which house a soul of their own. Moving up or down in a lift, one stands close, at times too close, hearing people's personal calls, seeing their social media timelines and smelling their odour. Metro stations have also provided a safe space to couples, who can be seen huddled together, sitting and chit-chatting with no worries, or hugging as they part.
The Metro is a true people mover and has increased accessibility to and from far-flung areas. One doesn't need to face long traffic jams. Earlier, people had to travel 2-3 hours to reach places, so plans were avoided; now the same distance is covered in 25 minutes. It has made places accessible, and people have better opportunities professionally, educationally and socially. Now they can reach right into the city's heart from the hinterland.
Overall, the Metro is what keeps a city like Delhi moving, else everyone would just be sitting in traffic for long hours and getting frustrated.
PS - I’ll highly recommend the book Metronama: Scenes From The Delhi Metro by Rashmi Sadana for people who want to understand how Delhi Metro came to be and it’s affects on the society. Rashmi being Associate professor of Anthropology has captured the intersection of this massive public infrastructure project and people who use it daily using first hand instances conversations with them. It really shows how Metro has transformed life of Delhiites for the better.
A casual talk about OpenStreetMap (OSM) with my roommate led to the discovery that Snapchat uses OpenStreetMap as the basemap for its map functionality. It felt great learning about this development.
I have been mapping actively since the time I moved to Gurugram with my colleague/roommate. My roommate used to see me doing edits and making roads in OSM. He was curious to know who even uses those maps, and during one of these discussions, I vaguely remembered reading somewhere about Snapchat and OpenStreetMap. Personally, I don't use Snapchat, so I asked him to open the map feature to confirm. Clicking on the small info icon at the bottom confirmed that Snapchat uses OpenStreetMap data through Mapbox (it seems to be using a secondary source for POI data). That got his interest piqued as well. Snap Maps (as they're called) can be viewed online at map.snapchat.com as well.
After Snap Maps, I also discovered that the big live-bookings screen at my office (an online travel aggregator) uses OpenStreetMap data. I had been passing this screen every day and only just noticed the small OpenStreetMap credit at the bottom. Next, I discovered OpenStreetMap being used in my live bus tracking link through MapTiler (though the data seemed to be old and didn’t contain my POI and road edits in nearby areas). Zomato also seems to use OpenStreetMap for restaurant location screenshots on its website.
These uses of OSM data in the wild give me the motivation to add and edit even more, to improve the map for everyone, including Snapchatters and Pokémon Go players everywhere. OpenStreetMap mapping has helped me know places better. Working out the history, road types, classifications and appropriate tags is always an experience. Seeing roads and POIs fill up empty stretches of the map always looks good. OpenStreetMap data has given rise to so many wonderful uses which wouldn’t have been possible with non-free licensed geographical data.
PS - If you’re new to OpenStreetMap and want to contribute, start with the StreetComplete app. It gamifies the whole data-addition part by asking questions about missing data in your local area.
PPS - My inbox is always open for chats regarding mapping and OSM.
I was introduced to the Linux Device Drivers book by Alessandro Rubini, Greg Kroah-Hartman, and Jonathan Corbet in a video by LiveOverflow from 2020, but I never got around to reading it.
This post is just me jotting down my experiences going through this book.
This book is freely available from lwn.net under the Creative Commons Attribution-ShareAlike license, version 2.0. LiveOverflow goes through a bit of the history in his video, so I don’t need to reiterate it here. Oh, and BTW, subscribe to lwn.net if you can!
A couple of years back, I built the kernel with the existing config from my Arch Linux install, along with custom kernels for building TWRP and some test ROMs for my previous Android devices. But that was it. The kernel source is too scary for me :sweat_smile:
So, in other words, this is another attempt to try and understand the kernel source, or at least some parts of it. I’ve heard device drivers are a good starting point. Well, the kernel itself can be thought of as a driver.
Chapter 1 is a lighter version of my engineering degree’s CS204 syllabus, but skewed a bit towards the Linux kernel.
Also, since LDD3 was written with Linux 2.6 in mind, some code examples require modifications. There is the ldd3 repo, which has the examples updated to work with recent kernels.
So, the real meat starts from chapter 2, which contains a hello-world kernel module!
I jumped on Vagrant to set up a quick, temporary VM. I just followed the Vagrant ArchWiki page and used libvirt as the virtualization provider. I also had to install nfs-utils along with the libvirt dependencies.
Ran vagrant init and edited the Vagrantfile to use the Debian Bullseye box debian/bullseye64. A sample config might look something like this.
Vagrant.configure("2") do |config|
  config.vm.box = "debian/bullseye64"

  config.vm.provider "libvirt" do |v|
    # Adjust these as required
    v.memory = 8162
    v.cpus = 4
  end
end
Now, just do vagrant up, and if it’s successful, you can do vagrant ssh to get into the VM.
You also need to install build-essential and the appropriate kernel headers inside the VM:
apt install build-essential linux-headers-`uname -r`
Copied over the hello.c file from here to the /vagrant/src directory on the guest (or to the ./src directory on the host) and added the makefile. Make sure to remove all the modules that we are not building at this step: only keep hello.o in obj-m.
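For reference, a minimal out-of-tree Makefile for just the hello module might look something like this; this is a sketch following the standard kbuild convention, and the ldd3 repo has the full multi-module version:

```makefile
# Build only the hello module
obj-m := hello.o

# Kernel build directory for the running kernel (the headers we installed)
KDIR ?= /lib/modules/$(shell uname -r)/build

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean
```

Running make in this directory hands the build over to the kernel’s own build system, which knows how to compile and link .ko files.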
While copying, I also noticed the placement of braces and spaces. It’s not something that I’m used to, so it might be worth looking at the very opinionated coding style for the kernel.
Now if you run make, it should hopefully build our very, very complex kernel module without errors :)
To insert our module into the kernel, we can use insmod: sudo insmod hello.ko. Note the .ko extension, indicating that it’s a kernel object.
At this point we can inspect the kernel logs, and we should see the function we defined in the module_init macro getting executed:
sudo tail /var/log/kern.log
Aug 24 09:04:43 bullseye kernel: [ 1143.598608] hello: loading out-of-tree module taints kernel.
Aug 24 09:04:43 bullseye kernel: [ 1143.599432] hello: module verification failed: signature and/or required key missing - tainting kernel
Aug 24 09:04:43 bullseye kernel: [ 1143.600865] Hello, world
How exciting is this! Something we wrote is getting executed in kernel space, not in userspace!!
To remove this module, we can run sudo rmmod hello and see our poor module realizing the realities of the world :P
Aug 24 09:26:00 bullseye kernel: [ 2421.287445] Goodbye, cruel world
Just to spice things up a bit, I decided to calculate the factorial of 10 from kernel space :laughing:
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("Dual BSD/GPL");

static int hello_init(void)
{
        printk(KERN_ALERT "Hello, world\n");
        return 0;
}

static void hello_exit(void)
{
        long fact = 1;
        int i = 1;

        while (i <= 10) {
                fact *= i;
                i++;
        }

        printk(KERN_INFO "Factorial: %ld\n", fact);
}

module_init(hello_init);
module_exit(hello_exit);
I’m pretty sure I have broken 10 different kernel coding guidelines with these ^ lines, and it’s probably against the conventions, but it’s fun!
Aug 28 10:04:26 bullseye kernel: [ 4727.115511] Hello, world
Aug 28 10:04:30 bullseye kernel: [ 4730.577740] Factorial: 3628800
I’ll be back with moarr sweet kernel juice.
07 Aug 2022
A few months back I went back to Slackware, and one of the things that I missed was Docker. SlackBuilds at that time was still on 14.2. Luckily, Docker provides statically compiled binaries.
Head on over to Docker's releases page and choose the version that we want. At the time of writing, docker-20.10.9.tgz was the latest.

$ cd ~/Downloads
$ wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz
I have a ~/Programs folder, where I put things that I don't install with a package manager. So in there it went.

$ tar xvf docker-20.10.9.tgz
$ mv docker ~/Programs
The docker folder has the binaries we need.

$ cd ~/Programs/docker
$ ls
./   containerd*       containerd-shim-runc-v2*  docker*   docker-init*   runc*
../  containerd-shim*  ctr*                      dockerd*  docker-proxy*
Now we need to add these binaries to our shell's PATH. I'm using zsh, so my ~/.zshrc has something like:

export PATH="${PATH}:$HOME/Programs/docker"

You can chuck that into your ~/.bashrc if you're using bash.
The Docker daemon requires root privileges to run, so we need to invoke it with:

$ sudo dockerd

But this will ask for our password each time. A tiny edit to the sudoers file can sort that out:
$ sudo visudo
That will open up the sudoers file in vi or nano, based on root's $EDITOR variable. Go to the end of the file and add:

john ALL=NOPASSWD: /home/john/Programs/docker/dockerd
Substitute john with your username. Now running sudo dockerd from our terminal launches the Docker daemon without a password prompt. But if we accidentally close that terminal, dockerd would also die. Enter tmux.
Create a shell script called start-docker in your ~/bin folder (I'm assuming ~/bin is in your $PATH). Put in this:

#!/bin/sh
tmux new-session -d -s docker 'sudo dockerd'
Make it executable with:

$ chmod +x ~/bin/start-docker

Now we can run it with start-docker from our terminal. To attach to the docker session in tmux, run:

$ tmux a -t docker
Here's a quick cheat-sheet on tmux if you're new to it. It is a brilliant tool to get familiar with if you spend a lot of time in the terminal.
There's one more step left if our user wants to run the docker CLI.

$ sudo groupadd docker
$ sudo usermod -aG docker $USER

This creates a group called docker, and then we add our user to it. Without this, our user gets a permission-denied error. (Log out and back in for the group change to take effect.)
To test if everything went well, run
$ docker run hello-world
And we should see something like this.
Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
This is how I run Docker, and it may not be the best setup for most people. For example, dockerd doesn't autostart when I boot up my machine. I actually like this, as it keeps my startup time fairly quick. These quirks could be remedied with shell scripts, but hey, I'm running Slackware :)
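If you did want dockerd to come up at boot, one minimal sketch would be to append a fragment like this to /etc/rc.d/rc.local (Slackware's startup script, which runs as root). The binary path and log file here are assumptions based on the ~/Programs layout above; adjust them to your setup.

```shell
# Hypothetical rc.local fragment: start dockerd at boot if it's installed.
# Path assumes the static binaries were unpacked as described above.
DOCKERD=/home/john/Programs/docker/dockerd
if [ -x "$DOCKERD" ]; then
    "$DOCKERD" >/var/log/dockerd.log 2>&1 &
fi
```

Since rc.local already runs as root, no sudo or sudoers tweak is needed for this path; the tmux approach above remains handy for attaching to the daemon's output interactively.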
Happy Hacking & have a great day!