#scripting

danie10@squeet.me

How to use the grep command in Linux for searching for, and inside, files

Title says How to use the grep command in Linux/Unix, on a blue background, and below it is an arrow pointing to the right, with a penguin sitting on top of a black command line interface.
In the vast realm of Linux, an open-source operating system, the grep command holds a significant place. An acronym for ‘Global Regular Expression Print’, the grep command is a widely-used Linux tool that gives users the power to search through text or output based on specific patterns. This command line utility is indispensable for efficient file and data management, serving as a cornerstone in Linux operations.

The grep command’s usefulness lies in its versatility and power. Whether you’re debugging a problem by searching through logs, searching through code for a specific function, or even looking for files with a specific keyword, the grep command is an indispensable tool in a Linux user’s toolkit. Its ability to filter output makes it an ideal command for piping with other commands to refine output further.

This very week I used the grep command to search through over 200 desktop files, first to check that each had a line inside starting with ‘Exec=’, and then to extract that line and print it into an output file I could examine (instead of opening every one of the 200+ files).

And as I used it here, the grep command is often combined with the pipe symbol ‘|’ to further process what grep has extracted, e.g. to pull out a sub-string or perform some other function.
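
The sort of thing I mean looks roughly like this (the directory and output file here are just placeholders, not the actual ones I used):

cd ~/.local/share/applications                    # wherever the .desktop files live
grep -L '^Exec=' *.desktop                        # list any files with no Exec= line at all
grep '^Exec=' *.desktop > /tmp/exec-lines.txt     # file.desktop:Exec=command, one per line
grep -h '^Exec=' *.desktop | cut -d= -f2-         # piped to cut: keep only the command part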

In fact, the bash shell itself can do a lot on its own: prompting for input, reading variables passed to it at runtime, while/for loops, if/then statements, and a lot more. If you want to automate something, you can frequently get by with a bash script file in place of a more complex language. But you won’t know unless you read up a bit on a few of the different commands. If you get stuck, ask Google Bard or a similar AI for some help, and it will even spit out some sample code you can use.
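
For instance, a tiny hypothetical script showing a prompt, a runtime argument, an if/then test and a loop:

#!/bin/bash
# Hypothetical example: prompt for input, read a runtime argument,
# then use an if/then test and a for loop.
read -r -p "Which directory? " dir    # prompt the user
pattern="${1:-Exec=}"                 # first argument passed at runtime, with a default
if [ -d "$dir" ]; then
  for f in "$dir"/*.desktop; do
    grep -H "^$pattern" "$f"
  done
else
  echo "No such directory: $dir" >&2
fi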

See https://www.linuxcapable.com/grep-command-in-linux-with-examples/
#Blog, #bash, #grep, #linux, #scripting, #technology

zebulon_1er@diasp.org

#fr #aide-demandée #gimp #scripting

So, here's the situation: I have more than 300 images to run through GIMP's Colors → Auto → White Balance.

And, of course, no luck ( - "Indian burial ground", are you there? - yes :P - grrr :S ): this filter is not available under Filters → Filter all layers, which would otherwise have let me load the images together as layers, filter all those layers in one go, and output the result as an animated GIF. All in one pass.

So I need to write a bash script that would do the job for each image: load the image created by #povray (.exr), filter it with Colors → Auto → White Balance, then save the modified image under the same name but into a different folder so as not to overwrite the source image.
Or, better, load the image (as a new layer), filter it, then move on to the next image (as a new layer on top), and so on. And finally, even if I have to finish that step by hand, save the result as an animated GIF.

I can't find any detailed documentation on how to call this filter from a command (Script-Fu? in reverse Polish notation)... a call that would be passed to GIMP from a bash script.
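
Something along the lines of the sketch below is what I'm after -- except with the right procedure in place of gimp-levels-stretch, which is only a stand-in here since I can't find a PDB call that matches Colors → Auto → White Balance exactly (untested; folder and file names are placeholders, and it writes PNG rather than EXR):

mkdir -p filtered
for f in *.exr; do
  gimp -i -b "
    (let* ((image    (car (gimp-file-load RUN-NONINTERACTIVE \"$f\" \"$f\")))
           (drawable (car (gimp-image-flatten image))))
      (gimp-levels-stretch drawable)   ; stand-in for the white balance step
      (file-png-save RUN-NONINTERACTIVE image drawable
                     \"filtered/${f%.exr}.png\" \"${f%.exr}\" 0 9 1 1 1 1 1)
      (gimp-image-delete image))" \
       -b "(gimp-quit 0)"
done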

canoodle@nerdpol.ch

GNU Linux bash - notebook laptop test battery runtime script

How long (how many hours) will this notebook/laptop battery last? Some sensors/software report or calculate things like this... 5 days on one charge is a very, very optimistic estimate for most Intel or AMD based notebooks (even for RISC/ARM based notebooks/laptops that would be AWESOME, most[...]
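
A much simpler take on the same idea, assuming the standard sysfs battery interface (BAT0 is a guess -- check ls /sys/class/power_supply/ for the real name):

#!/bin/bash
# Log the battery percentage once a minute until the machine dies;
# after recharging, the last line of the log gives the runtime.
BAT=/sys/class/power_supply/BAT0     # battery name varies per machine (assumption)
while true; do
  printf '%s %s%%\n' "$(date '+%F %T')" "$(cat "$BAT/capacity")" >> "$HOME/battery-runtime.log"
  sync                               # make sure the line hits disk before power is lost
  sleep 60
done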

#linux #gnu #gnulinux #opensource #administration #sysops #hardware #battery #test #script #bash #scripting #scripts

Originally posted at: https://dwaves.de/2022/07/08/gnu-linux-bash-notebook-laptop-test-battery-runtime-script/

dredmorbius@diaspora.glasswings.com

My current jq project: create a Diaspora post-abstracter

Given the lack of a search utility on Diaspora*, my evolved strategy has been to create an index or curation of posts, generally as a short entry consisting of the title, a brief summary (usually the first paragraph), the date, and the URL.

I'd like to group these by time segment, say, by month, quarter, or year (probably quarter/year).

And as I'm writing this, I'm thinking that it might be handy to indicate some measure of interactions --- comments, reshares, likes, etc.

My tools for developing this would be my Diaspora* profile data extract, and jq, the JSON query tool.

It's possible to do some basic extraction and conversion pretty easily. Going from there to a more polished output is ... more complicated.


A typical original post might look like this (excluding the subscribed_pods_uris array):

{
  "entity_type": "status_message",
  "entity_data": {
    "author": "dredmorbius@joindiaspora.com",
    "guid": "cc046b1e71fb043d",
    "created_at": "2012-05-17T19:33:50Z",
    "public": true,
    "text": "Hey everyone, I'm #NewHere. I'm interested in #debian and #linux, among other things. Thanks for the invite, Atanas Entchev!\r\n\r\nYet another G+ refuge.",
    "photos": []
  }
}

Key points here are:

  • entity_type: Values "status_message" or "reshare".
  • author: This is the user_id of the author, yours truly (in this case in my DiasporaCom incarnation).
  • guid: Can be used to construct a URL in the form of https://<hostname>/posts/<guid>
  • created_at: The original posting date, in UTC ("Zulu" time).
  • public: Status, values true, false. Also apparently missing in a significant number of posts.
  • text: The post text itself.

A reshare looks like:

{
  "entity_type": "reshare",
  "entity_data": {
    "author": "dredmorbius@joindiaspora.com",
    "guid": "5bfac2041ff20567",
    "created_at": "2013-12-15T12:45:08Z",
    "root_author": "willhill@joindiaspora.com",
    "root_guid": "53e457fd80e73bca"
  }
}

Again, excluding the .subscribed_pods_uris. In most cases, reshares are of less interest than direct posts.

Interestingly, I've a pretty even split between posts and reshares (52% status_message, that is, post).

My theory in creating an abstract is:

  • Automation is good.
  • It's easier to peel stuff off an automatically-created abstract than to add bits back in manually.
  • The compilation should contain only public posts and exclude reshares.

Issues:

  • It's relatively easy to create a basic extract:
jq '.user.posts[].entity_data | .author, .guid, .created_at, .text'

Adding in selection and formatting logic gets ... more complicated.

Among other factors, jq is a very quirky language.

Desired Output Format

I would like to produce output which renders something like this for any given post:


Diaspora Tips: Pods, Hashtags & Following

For the many Google Plus refugees showing up on Diaspora and Pluspora, some pointers: ...

https://diaspora.glasswings.com/posts/a53ac360ae53013611b60218b786018b (2018-10-10 00:45)


What if any options are there for running Federated social networking tools on or through #OpenWRT or related router systems on a single-user or household basis?

I'm trying to coordinate and gather information for #googleplus (and other) users looking to migrate to Fediverse platforms, and I'm aware that OpenWRT, #Turris (I have a #TurrisOmnia), and several other router platforms can run services, mostly #NextCloud that I'm aware. ...

https://diaspora.glasswings.com/posts/91f54380af58013612800218b786018b (2018-10-11 07:52)


The original posts can of course be viewed at the URLs shown.

What this is doing is:

  • Extracting the first line of the post text itself.
  • Stripping all formatting from it.
  • Bolding the result by surrounding it in ** Markdown.
  • Including the second paragraph, terminating it with an ellipsis (...).
  • Including a generated URL, based on the GUID, and here parked on Glasswings. (I might also create links to Archive.Today and Archive.Org of the original content.)
  • Including the post date, with time in YYYY-MM-DD hh:mm resolution.

Including the month and year where those change might also be useful for creating archives.

Specific questions / challenges:

  • How to conditionally export only public posts.
  • How to conditionally export only status_message (that is, original) posts, rather than reshares.
  • How to create lagged "oldYear" and "oldMonth" variables.
  • How to conditionally output content when computed Month and Year values > oldMonth and oldYear respectively. Goal is to create ## .year and ### .month segments in output.
  • How to output up to two paragraphs, where posts may consist of fewer than two separate text lines, and lines may be separated by multiple or only single linefeeds \r\n.
  • Collect and output hashtags used in the post.
  • Include counts of comments, reshares, likes, etc. I'm not even sure this is included in the JSON output.

There might be more, but that's a good start.

And of course, if I have to invoke other tools for part of the formatting, that's an option, though an all-in-jq solution would be handy.
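
As a first rough cut at the selection and URL/date parts of the above (the input file name is a placeholder, the first line isn't stripped of formatting, and the single-vs-double linefeed question is still open):

jq -r '
  .user.posts[]
  | select(.entity_type == "status_message")   # originals only, no reshares
  | .entity_data
  | select(.public == true)                    # public only (posts missing .public drop out too)
  | (.text | split("\r\n\r\n")) as $p
  | "**" + $p[0] + "**\n\n"
    + (if ($p | length) > 1 then $p[1] + " ...\n\n" else "" end)
    + "https://diaspora.glasswings.com/posts/" + .guid
    + " (" + (.created_at[0:16] | sub("T"; " ")) + ")\n"
' diaspora_profile.json

The lagged year/month headers, hashtag collection and interaction counts would still need to be layered on top of this.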

#jq #json #diaspora #scripting #linux

danie10@squeet.me

Bash Scripting – Functions Explained With Examples

In Bash shell scripting, functions are a way to group a set of instructions together to get a specific outcome. You can think of functions as mini scripts. Functions are also called procedures and methods in some programming languages. Functions are a great way to achieve modularity and reusability, especially if you define your functions in one script file and import them into other script files. This is similar to import statements in Python, include statements in C, etc.
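
A minimal sketch of that last point (the file and function names are just illustrative):

# lib.sh -- the function lives in its own file
greet() {
  local name="${1:-world}"           # 'local' keeps the variable out of the caller's scope
  echo "Hello, $name"
}

# main.sh -- and is imported elsewhere with 'source' (or the '.' builtin)
source ./lib.sh
greet "Linux"                        # prints: Hello, Linux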

See Bash Scripting - Functions Explained With Examples - OSTechNix

#technology #linux #bash #functions #scripting

This guide explains what Bash functions are and how to define and call a function in Bash scripts, with examples.


https://gadgeteer.co.za/bash-scripting-functions-explained-examples

montag@friendica.xyz

Hello everyone,

is there anyone here who still knows their way around #Paradox and #scripting in #ObjectPAL? I need to change something in an application I cobbled together 25 years ago and which unfortunately is still in use (not by me). And unfortunately I no longer have any manuals or the like; nobody could have guessed it would be used for this long ...

carstenraddatz@pluspora.com

TIL: when a company mentions "scripting languages" (in PR, job descriptions, and the like), it doesn't mean bash, but rather "an obscure command line we haven't automated" on its legacy hardware.

Often it simply means they don't know netmiko, which does an excellent job of abstracting away the vendors' idiosyncrasies.

#bash #scripting #netmiko #paramiko #python #networking

dredmorbius@joindiaspora.com

desed: a sed debugger

Desed is a command line tool with a beautiful TUI that provides a comfortable interface and a practical debugger for stepping through complex sed scripts.

Some of the notable features include:

  • Preview variable values, both of them!
  • See how a substitute command will affect the pattern space before it runs
  • Step through a sed script - both forwards and backwards!
  • Place breakpoints and examine program state
  • Hot reload and see what changes as you edit source code
  • Its name is a palindrome

https://github.com/SoptikHa2/desed/
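
Going by the project README at the link above (check it for the authoritative commands), installation and a basic run look roughly like:

cargo install desed          # or grab a release binary / distro package
desed script.sed input.txt   # step through script.sed as it runs over input.txt (placeholder names)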

#sed #linux #scripting #debuggers