#filesystem

anonymiss@despora.de

Easy-to-exploit local privilege escalation vulnerabilities in #Ubuntu #Linux affect 40% of Ubuntu cloud workloads

source: https://www.wiz.io/blog/ubuntu-overlayfs-vulnerability

CVE-2023-2640 and CVE-2023-32629 were found in the #OverlayFS module in Ubuntu, a widely used Linux #filesystem that became highly popular with the rise of containers, as its features enable the deployment of dynamic filesystems based on pre-built images. OverlayFS is an attractive attack surface because it has a history of numerous logical vulnerabilities that were easy to exploit. This makes the newly discovered vulnerabilities especially risky, given that exploits for past OverlayFS vulnerabilities work out of the box without any changes.

#security #os #software #update #bug #problem #news #exploit #hack #hacker #server #vulnerability

danie10@squeet.me

Why I’m interested in BTRFS filesystem instead of ext4 on Linux

According to the lead author, the "Butter-FS" nickname originated from CoW, which stands for copy-on-write, though many call it Better-FS; it actually stands for B-tree file system. It has been around since 2007, so it is relatively new compared to other file systems, but the consensus now seems to be that it is finally quite stable, the exception being new features still under development. I'm still a novice at BTRFS, but these are some of my reasons for wanting to fully move across to it.

Potential downsides (yes, it is no magic bullet) are that it can be slightly slower due to compression and validation, which supposedly makes it especially unsuited for large databases. The noatime mount option can be used to disable the Linux 'access time' write on every file read. It will also use a bit more space, as it copies updated data to new sectors instead of overwriting data in place like ext4 and others do. Although there is RAID functionality, BTRFS is not actually doing backups (it mirrors); you still need to back up off-site and to other media. There is no built-in filesystem encryption (it is planned, though); you can use other standards instead, but these could potentially negate some advantages of BTRFS, e.g. using raw block devices.

But the advantages may well outweigh the disadvantages:
1. Copy-on-Write: Any existing data being edited or updated is left untouched on the drive, which means potentially less loss and a much easier, quicker, and more reliable roll-back to a previous state.
2. Snapshots: Manual or automated, they are extremely quick as they do not recopy all previously existing data; a snapshot is essentially a copy of the metadata describing the state of the files. Where this is done for the boot drive, such snapshots can be configured to appear automatically in the GRUB boot menu for quickly reverting to a previous version; no manual booting from a LiveCD, chrooting, or Clonezilla restores needed. SUSE produced an excellent Snapper app for managing snapshots, but Timeshift also supports them (see the command sketch after this list).
3. Software RAID: What stands out is that drives need not be matched sizes at all. You can also add to a running system, and just rebalance BTRFS. BTRFS rebuilds involve only the blocks actively used by the file system, so rebuilds are much quicker than most other systems.
4. Self-healing: Checksums for data and metadata, automatic detection of silent data corruptions. Checksums are verified each time a data block is read from disk.
5. Three different compression options: ZLIB, LZO, and ZSTD differ in terms of speed and amount of compression. You can compress only new files, or process the whole file system, or just do specific individual files if you wish. Compression is supported on a per mount basis. If compression makes the file size any bigger than the original, then the Btrfs filesystem will, by default, not compress that file.
6. Utilities: Scrub for validating checksums, defragmenting while subvolumes are mounted. Check (unmounted drives) is similar to fsck. Balancing for adding new drives to a RAID or other changes made to BTRFS.
7. Send/Receive of subvolume changes: Very efficient way of mirroring to a remote system via various options over a LAN or the Internet.
8. Disk Partitioning: In theory you could use no partitions at all, but it is recommended you create at least one (GRUB prefers it). The rest of the drive, though, can be BTRFS subvolumes that you can resize on the fly without unmounting or using a LiveCD.
9. Very large VM files: You can add them to separate subvolumes you create and have them act as independent files without copy-on-write (remember, you are still backing up, aren't you?).
10. Conversion from other filesystems (ext2, ext3, ext4, reiserfs) to btrfs: Copy-on-write algorithms allow BTRFS to preserve an unmodified copy of the original FS and allow the administrator to undo the conversion, even after making changes in the resulting BTRFS filesystem.
11. Linux kernel includes BTRFS support so no need to install drivers, just the software utility apps to manage it.
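
To make points 2, 6 and 7 concrete, here is a minimal sketch of the commands involved (paths and snapshot names are made up for illustration; it assumes /home is a btrfs subvolume, /home/.snapshots exists, and you run as root):

# point 2: create a read-only snapshot of the /home subvolume
btrfs subvolume snapshot -r /home /home/.snapshots/home-2022-01-01
# point 6: verify checksums of all data and metadata on the mounted filesystem
btrfs scrub start /
# point 7: replicate the read-only snapshot to another btrfs disk mounted at /mnt/backup
btrfs send /home/.snapshots/home-2022-01-01 | btrfs receive /mnt/backup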

So converting my /home partition was quite easy. I've not yet quite decided how to do my boot drive, as I need to think about what I want to do with subvolume creation and what best practices I need to consider regarding the inclusion of GRUB etc. A Clonezilla copy of my boot drive means I can experiment and quickly restore without worries, though.
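
For reference, the conversion itself is a single command; a minimal sketch, assuming the ext4 partition to convert is /dev/sdb1 (it must be unmounted, and a backup first is still a good idea):

umount /dev/sdb1
btrfs-convert /dev/sdb1        # convert the ext4 filesystem to btrfs in place
# if anything goes wrong, the original ext4 filesystem can be restored with:
# btrfs-convert -r /dev/sdb1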


#BTRFS #Linux #filesystem
#Blog

lorenzoancora@pod.mttv.it

Gain unprivileged access to an overlapped directory in Flatpak

Issue

/usr and other hierarchies on the host cannot be accessed from Flatpak, because they conflict with the sandbox. Instead, you are presented with a fake, overlapped filesystem hierarchy. Currently, Flatpak alone has no working options to solve this issue, as configuration overrides have no effect on those special filesystem hierarchies. As Linux does not support directory hard links, this is a serious nuisance!

Solution

Luckily, there is a workaround to safely access the original directory without having root access, if your sysadmin (or you, if you own the system) installed the bindfs package.

The bindfs command uses a FUSE filesystem to mirror the contents of a directory to another directory:

bindfs /overlapped ~/.overlapped

If high performance is needed:

bindfs -o multithreaded /overlapped ~/.overlapped

If security (read-only access) is needed:

bindfs -o ro /overlapped ~/.overlapped

Example

TASK: access the documentation on a Debian system from a Flatpak app.

user@localhost:~$ mkdir .doc
user@localhost:~$ bindfs -o ro,multithreaded /usr/share/doc .doc
user@localhost:~$ ls .doc

…will grant you fast, read-only access to /usr/share/doc by visiting .doc in your user home.
ls .doc will list the contents of /usr/share/doc, while .doc is not a symlink but a simple directory created by you.
You can now, e.g., use the Flatpak version of Mozilla Firefox to browse file:///home/yourusername/.doc and it will let you read the files in /usr/share/doc, which are normally inaccessible under Flatpak.
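
When you no longer need the mirror, it can be detached like any other FUSE mount (using the ~/.doc mount point from the example above):

fusermount -u ~/.doc    # unmount the bindfs mirror; the now-empty .doc directory can be removed afterwards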

Note: this is not an official workaround; I found it by accident. If you know better alternatives, please feel free to comment so other users can benefit. Thank you.


Tags: #linux #gnulinux #debian #flatpak #sandbox #virtualization #security #hacking #filesystem #fs #docs #sysadmin #sys #documentation

danie10@squeet.me

Bcachefs Might Be Ready For Upstreaming In Linux This Year – ‘The COW filesystem for Linux that won’t eat your data’

The Bcachefs file-system that was born out of the Linux kernel’s block cache code has over the past few years matured greatly. Now in 2022 the core fundamentals of the file-system are “pretty close to done” and will hopefully be mainlined this calendar year into the Linux kernel.

Bcachefs has been in development since the mid-2010s and aims for speed while having ZFS/Btrfs-like features. It's been under heavy feature development, over time picking up features such as Btrfs-like snapshots (also referred to as bad@$$ snapshots), among other promising feature work, to allow it to compete as a next-gen file-system.

See https://www.phoronix.com/scan.php?page=news_item&px=Bcachefs-2022-Hopes

#technology #linux #filesystem #bcachefs
#Blog

canoodle@nerdpol.ch

THE most controversial filesystem in the known universe: ZFS - so ext4 is faster on single disk systems - btrfs with snapshots but without the zfs licensing problems

ZFS is probably THE most controversial filesystem in the known universe:

“FOSS means that effort is shared across organizations and lowers maintenance costs significantly” (src: comment by JohnFOSS on itsfoss.com)

“The whole purpose behind ZFS was to provide a next-gen filesystem for UNIX and UNIX-like operating systems.” (src: comment by JohnK3 on itsfoss.com)

“The performance is good, the reliability and protection of data is unparalleled, and the flexibility is great, allowing you to configure pools and their caches as you see fit. The fact that it is independent of RAID hardware is another bonus, because you can rescue pools on any system, if a server goes down. No looking around for a compatible RAID controller or storage device.”

“after what they did to all of SUN’s open source projects after acquiring them. Oracle is best considered an evil corporation, and anti-open source.”

“it is sad – however – that licensing issues often get in the way of the best solutions being used” (src: comment by mattlach on itsfoss.com)

“Zfs is greatly needed in Linux by anyone having to deal with very large amounts of data. This need is growing larger and larger every year that passes.” (src: comment by Tman7 on itsfoss.com)

“I need ZFS, because In the country were I live, we have 2-12 power-fails/week. I had many music files (ext4) corrupted during the last 10 years.” (src: comment by Bert Nijhof on itsfoss.com)

“some functionalities in ZFS does not have parallels in other filesystems. It’s not only about performance but also stability and recovery flexibility that drives most to choose ZFS.” (src: comment by Rubens on itsfoss.com)

“Some BtrFS features outperform ZFS, to the point where I would not consider wasting my time installing ZFS on anything. I love what BtrFS is doing for me, and I won’t downgrade to ext4 or any other fs. So at this point BtrFS is the only fs for me.” (src: comment by Russell W Behne on itsfoss.com)

“Btrfs Storage Technology: The copy-on-write (COW) file system, natively supported by the Linux kernel, implements features such as snapshots, built-in RAID, and self-healing via checksumming for data and metadata. It allows taking subvolume snapshots and supports offline storage migration while keeping snapshots. For users of enterprise storage systems, Btrfs provides file system integrity after unexpected power loss, helps prevent bitrot, and is designed for high-capacity and high-performance storage servers.” (src: storagereview.com)

BTRFS is GPL 2.0 licensed, btw (like the rest of the Linux kernel).

Bachelor projects have been written about btrfs vs zfs (2015).

so…

ext4 is good for notebooks & desktops & workstations (that do regular backups on a separate, external, then disconnected medium)

is zfs “better” on/for servers? (this user says: even on single disk systems, zfs is “better” as it prevents bit-rot-file-corruption)

with server-hardware one means:

  • computers with massive computational resources (CPUs, RAM & disks)
    • at least 2 disks for RAID1 (mirroring = safety)
    • or better: 4 disks for RAID10 (striping + mirroring = speed + safety)
  • zfs wants direct access to the disks, without any hardware RAID controller or caches in between, so it is “fine” with simple onboard SATA connections, HBA cards that do nothing but provide SATA / SAS / NVMe ports, or hardware RAID controllers that behave like HBA cards (JBOD mode; some need firmware flashed, some need to be jumpered)
    • fun fact: this is not the default for servers. Servers (usually) come with LSI (or other vendor) hardware RAID cards that might be possible to jumper or flash into JBOD mode, but that would mean: zfs is only good for servers WITHOUT hardware RAID cards X-D (and those are (currently still) rare X-D)
      • but it would be a “perfect” fit for a consumer-hardware PC (having only SATA ports) used as a server (many companies, not only Google but also Proxmox and even Hetzner, test out that way of operation), but it might not be the right fit for every admin who would rather spend some bucks extra and provide companies with the most reliable hardware possible (redundant power supplies etc.)
      • maybe that is also a cluster vs mainframe “thinking”
        • so in a cluster, if some nodes fail, it does not matter, as other nodes take over and are replaced fast (but some server has to store the central database, that is not allowed to fail X-D)
        • in a non-cluster environment, things might be very different
  • “to ECC or not to ECC the RAM”, that is the question:
    • zfs also runs on machines without ECC, but:
      • for semi-professional purposes non-ECC might be okay
      • for companies with critical data, maximum error correction via ECC is a must (magnetic fields / solar flares could potentially flip some bits in RAM; if the faulty data is then written back to disk, ZFS cannot correct that)
      • “authors of a 2010 study that examined the ability of file systems to detect and prevent data corruption, with particular focus on ZFS, observed that ZFS itself is effective in detecting and correcting data errors on storage devices, but that it assumes data in RAM is “safe”, and not prone to error”
      • “One of the main architects of ZFS, Matt Ahrens, explains there is an option to enable checksumming of data in memory by using the ZFS_DEBUG_MODIFY flag (zfs_flags=0x10) which addresses these concerns.[73]” (wiki)

zfs: snapshots!

zfs has many more awesome features:

  • Protection against data corruption. Integrity checking for both data and metadata.
  • Continuous integrity verification and automatic “self-healing” repair
    • Data redundancy with mirroring, RAID-Z1/2/3 [and DRAID]
  • Support for high storage capacities — up to 256 trillion yobibytes (2^128 bytes)
  • Space-saving with transparent compression using LZ4, GZIP or ZSTD
  • Hardware-accelerated native encryption
  • Efficient storage with snapshots and copy-on-write clones
  • Efficient local or remote replication — send only changed blocks with ZFS send and receive

(src)

how much space do snapshots use?

look at WRITTEN, not at USED.

https://ytpak.net/watch?v=NXg86uBDSqI


https://papers.freebsd.org/2019/bsdcan/ahrens-how_zfs_snapshots_really_work/
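
A minimal sketch of how to check this, with a made-up pool name "tank":

# per-snapshot accounting: USED only counts space exclusive to the snapshot,
# while WRITTEN shows how much was written since the previous snapshot
zfs list -r -t snapshot -o name,used,written tank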

performance?

so on a single-drive system, performance-wise ext4 is what the user wants.

on multi-drive systems, the opposite might be true, zfs outperforming ext4.

it is a filesystem + a volume manager! 🙂

“is not necessary nor recommended to partition the drives before creating the zfs filesystem” (src, src of src)

http://perftuner.blogspot.com/2017/02/zfs-zettabyte-file-system.html


RAID10?

there is no named raid10 level in zfs, though you can build the equivalent by striping mirror vdevs (see the sketch after the quoted list below); the raidz levels use parity, which means: at least one disk's worth of space is used for parity

  • “raid5 or raidz distributes parity along with the data
    • can lose 1x physical drive before a raid failure.
    • Because parity needs to be calculated raid 5 is slower then raid0, but raid 5 is much safer.
    • RAID 5 requires at least 3x hard disks in which one(1) full disk of space is used for parity.
  • raid6 or raidz2 distributes parity along with the data
    • can lose 2x physical drives instead of just one like raid 5.
    • Because more parity needs to be calculated raid 6 is slower then raid5, but raid6 is safer.
    • raidz2 requires at least 4x disks and will use two(2) disks of space for parity.
  • raid7 or raidz3 distributes parity just like raid 5 and 6
    • but raid7 can lose 3x physical drives.
    • Since triple parity needs to be calculated raid 7 is slower then raid5 and raid 6, but raid 7 is the safest of the three.
    • raidz3 requires at least 4x, but should be used with no less then 5x disks, of which 3x disks of space are used for parity.
  • raid10 or raid1+0 is mirroring and striping of data.
    • The simplest raid10 array has 4x disks and consists of two pairs of mirrors.
    • Disk 1 and 2 are mirrors and separately disk 3 and 4 are another mirror.
    • Data is then striped (think raid0) across both mirrors.
    • One can lose one drive in each mirror and the data is still safe.
    • One can not lose both drives which make up one mirror, for example drives 1 and 2 can not be lost at the same time.
    • Raid 10 ‘s advantage is reading data is fast.
    • The disadvantages are the writes are slow (multiple mirrors) and capacity is low.”

(src, src)
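
For completeness, here is roughly how the "raid10 equivalent" (striped mirrors) and a raidz2 pool would be created; a minimal sketch with made-up pool and device names (pick one layout, not both):

# zfs equivalent of raid10: two mirror vdevs, data is striped across them
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
# raidz2 over four disks: two disks' worth of capacity go to parity
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd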

ZFS supports SSD/NVMe caching + RAM caching:

more RAM is better than a dedicated SSD/NVMe cache, BUT zfs can do both, which is remarkable.

(the optimum probably being RAM + SSD/NVMe caching)
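
Adding such a cache to an existing pool is a one-liner; a minimal sketch with a made-up pool name and devices:

# attach an NVMe device as a level-2 read cache (L2ARC) to the pool "tank"
zpool add tank cache /dev/nvme0n1
# a separate log device (SLOG) for synchronous writes is added the same way
zpool add tank log /dev/nvme1n1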

ubuntu offers zfs as an (experimental) root filesystem install option

ZFS & Ubuntu 20.04 LTS

“our ZFS support with ZSys is still experimental.”

https://ubuntu.com/blog/zfs-focus-on-ubuntu-20-04-lts-whats-new

ZFS licence problems/incompatibility with GPL 2.0 #wtf Oracle! again?

Linus: “And honestly, there is no way I can merge any of the ZFS efforts until I get an official letter from Oracle that is signed by their main legal counsel or preferably by Larry Ellison himself that says that yes, it’s ok to do so and treat the end result as GPL’d.” (itsfoss.com)

comment by vagrantprodigy: “Another sad example of Linus letting very limited exposure to something (and very out of date, and frankly, incorrect information about it’s licensing) impact the Linux world as a whole. There are no licensing issues, OPENZFS is maintained, and the performance and reliability is better than the alternatives.” (itsfoss.com)

https://itsfoss.com/linus-torvalds-zfs/


it is Open Source, but not GPL licensed: for Linus that's a no-go and, quite frankly, yes, it is a problem.

“this article missed the fact that CDDL was DESIGNED to be incompatible with the GPL” (comment by S O on itsfoss.com)

it can also be called “bait”

“There is always a thing called “in roads”, where it can also be called “bait”.

“The article says a lot in this respect.

“That Microsoft founder Bill Gates' comment a long time ago was that “nothing should be for free.”

That too rings out loud, especially in today’s American/European/World of “corporate business practices” where they want what they consider to be their share of things created by others.

Just to be able to take, with not doing any of the real work.

That the basis of the GNU Gnu Pub. License (GPL) 2.0 basically says here it is, free, and the Com. Dev. & Dist.

License (CDDL) 1.0 says use it for free, find our bugs, but we still have options on its use, later on downstream.

..

And nothing really is for free, when it is offered by some businesses, but initial free use is one way to find all the bugs, and then begin charging costs.

And it it has been incorporated into a linux distribution, then the linux distribution could later come to a legal halt, a legal gotcha in a court of law.

In this respect, the article is a good caution to bear in mind, that the differences in licensing can have consequences later in time. Good article to encourage linux users to also bear in mind that using any programs that are not GNU Gen. Pub. License (GPL) 2.0 can later on have consequences, having an effect on a lot of people, big time.

That businesses (corporations have long life spans) want to dominate markets with their products, and competition is not wanted.

So, how do you eliminate or hinder the competition?

… Keep Linux free as well as free from legal downstream entanglements.”

(comment by Bruce Lockert on itsfoss.com)

Imagine this: just as with Java, Oracle might decide to change the licence on any day Oracle seems fit to “cash in” on the ZFS users and demand purchasing a licence… #wtf Oracle

Guess one is not alone with that thinking: “Linus has nailed the coffin of ZFS! It adds no value to open source and freedom. It rather restricts it. It is a waste of effort. Another attack at open source. Very clever disguised under an obscure license to trap the ordinary user in a payed environment in the future.” (comment by Tuxedo on itsfoss.com)

GNU Linux Debian warns during installation:

“Licenses of OpenZFS and Linux are incompatible”

  • OpenZFS is licensed under the Common Development and Distribution License (CDDL), and the Linux kernel is licensed under the GNU General Public License Version 2 (GPL-2).
  • While both are free open source licenses they are restrictive licenses.
  • The combination of them causes problems because it prevents using pieces of code exclusively available under one license with pieces of code exclusively available under the other in the same binary.
  • You are going to build OpenZFS using DKMS in such a way that they are not going to be built into one monolithic binary.
  • Please be aware that distributing both of the binaries in the same media (disk images, virtual appliances, etc) may lead to infringing.

“You cannot change the license when forking (only the copyright owners can), and with the same license the legal concerns remain the same. So forking is not a solution.” (comment by MestreLion on itsfoss.com)

OpenZFS 2.0

“This effort is fast-forwarding delivery of advances like dataset encryption, major performance improvements, and compatibility with Linux ZFS pools.” (src: truenas.com)

https://arstechnica.com/gadgets/2020/12/openzfs-2-0-release-unifies-linux-bsd-and-adds-tons-of-new-features/

tricky.

of course users can say “haha” “accidentally deleted millions of files” “no backups” “now snapshots would be great”

or come up with a smart file system that can do snapshots.

how to on GNU Linux Debian 11:

https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/index.html

https://wiki.debian.org/ZFS
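
The short version from those guides boils down to enabling the contrib component and installing the DKMS module; a minimal sketch (see the official links above for the full, current instructions):

# add "contrib" to the bullseye entries in /etc/apt/sources.list, then:
apt update
apt install linux-headers-amd64 zfs-dkms zfsutils-linux
modprobe zfs    # load the module once the DKMS build has finished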

note:

with ext4 it was recommended to put GNU Linux /root and /swap on a dedicated SSD/NVMe (that then regularly backs up to the larger raid10)

but then the user would miss out on the awesome zfs snapshot-restore features, which would mean (see the sketch after the list below):

  • no more fear of updates
    • take snapshot before update
    • do system update (moving between major versions of Debian 9 -> 10 can be problematic, sometimes it works, sometimes it will not)
    • test the system according to list of use cases (“this used to work, this too”)
    • if update breaks stuff -> boot from a usb stick -> roll back snapshot (YET TO BE TESTED!)
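
A minimal sketch of that workflow, assuming a ZFS-on-root layout where the root dataset is called rpool/ROOT/debian (names vary per install, and as noted above this is untested here):

# before the upgrade: recursive snapshot of the whole root pool
zfs snapshot -r rpool@pre-upgrade
# ... run the upgrade and test the system ...
# if the upgrade broke things: boot a live/rescue system, import the pool and roll back
zpool import -f rpool
zfs rollback rpool/ROOT/debian@pre-upgrade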

Links:

https://openzfs.org/wiki/Main_Page

#linux #gnu #gnulinux #opensource #administration #sysops #zfs #openzfs #filesystem #filesystems #ubuntu #btrfs #ext4 #gnu-linux #oracle #licence

Originally posted at: https://dwaves.de/2022/01/20/the-most-controversial-filesytem-in-the-known-universe-zfs-so-ext4-is-faster-on-single-disk-systems-btrfs-with-snapshots-but-without-the-zfs-licensing-problems/

canoodle@nerdpol.ch

so ext4 is good for notebooks & desktops & workstations, zfs is better on servers?

so, ext4 is good for notebooks & desktops & workstations (that do regular backups on a separate, external, then disconnected medium) is zfs "better" on/for servers? (this user says: even on single disk systems, zfs is "better" as it prevents bit-rot-file-corruption) with[...]

#linux #gnu #gnulinux #opensource #administration #sysops #zfs #openzfs #filesystem #filesystems #ubuntu

Originally posted at: https://dwaves.de/2022/01/20/so-ext4-is-good-for-notebooks-desktops-workstations-zfs-is-better-on-servers/

deusfigendi@pod.geraspora.de

Many, many years ago I once got very annoyed that I could not create a file on my computer that was supposed to be called con.con.
Back then I had written a program that creates and names various files depending on their content, etc. What annoyed me in particular was that there was no error: my program was told it had successfully written the file, done. Only I could not read it later, because it was not there.

I was very annoyed, very, and ranted in internet forums about what rubbish this was, etc. At some point somebody pointed me to a Microsoft document in which it was indeed defined that you must not name a file con (and on Windows, con and Con and CON and CoN are all the same anyway).
Well, at least it was documented, BUT WHY DON'T I GET AN ERROR MESSAGE, I complained.

Now I stumbled over this video https://www.youtube.com/watch?v=bC6tngl0PTI which finally explains the matter to me properly.

And in case you don't feel like watching it (it is in a foreign language), here is the explanation in brief:
* DOS had a concept similar to Linux's "everything is a file", meaning devices had a file representation (just like /dev/tty2)
* CON was such a file representation for a console
* Filename extensions (the .con that I used, or .txt and so on) are not really part of the filename (and are therefore treated independently of it)
* And because CON is such a file-device thingy, there is no error message either; it is perfectly correct that I can write "to the console".

The whole "incident" is now, uh, 15 years ago or so, and NOW, NOW FINALLY someone has explained it to me in a way that actually makes sense :D. Until today it was just a case of "well, it doesn't work, because Windows" for me.

#Windows #Microsoft #Operatingsystem #Betriebssystem #Softwaredevelopment #Softwareentwicklung #Fehler #Error #con #problem #Erklärung #explanation #filesystem #Dateisystem #youtube #filedevice

mischerh@pluspora.com

automount CIFS share with autofs

This HowTo will prepare a Linux client to automatically mount CIFS shares from a remote Samba server on access/demand. Since I am mounting different filesystems, I have structured my mountpoints as follows:

<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false" data-enlighter-theme="enlighter">/
├── home
│   ├── USER
│   │   ├── mnt
│   │   │   ├── cifs
│   │   │   │   ├── smb-server-a.fqdn
│   │   │   │   │   ├── share-a
│   │   │   │   │   ├── share-b
│   │   │   │   │   └── share-c
│   │   │   │   ├── smb-server-b.fqdn
│   │   │   │   │   ├── share-b
│   │   │   │   │   ├── share-b
│   │   │   │   │   └── share-c
│   │   │   ├── sshfs
│   │   │   │   ├── ssh-server-a.fqdn

From here on, I will use “mysambaserver.local” as the Samba server's FQDN, “mysambaserver” as its hostname, “myusername” as my username, “mygroup” as the group and “myPassWord” as the password.

At time of writing, the server is running Ubuntu 18.04.4 LTS and the client is running Ubuntu 20.04.1 LTS.

This HowTo was compiled by trial and error and from several online sources.

Install Required packages, check supported filesystems

Install the required packages on the client (gigolo is just “nice to have”) and check if its kernel supports CIFS.

<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-linenumbers="false">apt-get install autofs cifs-utils smbclient gigolo gvfs-backends gvfs-fuse fuse
ls -1 /lib/modules/$(uname -r)/kernel/fs | grep "cifs"
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false" data-enlighter-theme="enlighter">cifs

Check remote SAMBA connection

Check remote connection to the Samba server:

<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-linenumbers="false">smbclient -N -L //<mysambaserver.local>/
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false" data-enlighter-theme="enlighter"> 

        Sharename       Type      Comment

        ---------       ----      -------

        share-a         Disk      Share A

        share-b         Disk      Share B

        share-c         Disk      Share C

        IPC$            IPC       IPC Service (mysambaserver server (Samba, Ubuntu))

SMB1 disabled -- no workgroup available

Check authenticated login

Check an authenticated remote login. If the command line asks for a password, enter the SMB password which is configured for the user at the Samba server (via smbpasswd).

<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-linenumbers="false">smbclient -U <myusername> -L //<mysambaserver.local>/
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false" data-enlighter-theme="enlighter">Enter WORKGROUP\myusername's password: [myPassWord] <--- enter the password


        Sharename       Type      Comment

        ---------       ----      -------

        share-a         Disk      Share A

        share-b         Disk      Share B

        share-c         Disk      Share C

        IPC$            IPC       IPC Service (mysambaserver server (Samba, Ubuntu))

SMB1 disabled -- no workgroup available

Create mount point

Create the mount point in the users home directory:

<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-linenumbers="false">mkdir -pv /home/<myusername>/mnt/cifs
chown -R <myusername>:<mygroup> /home/<myusername>/mnt/

autofs configuration

The configuration consists of the master map file (/etc/auto.master), the corresponding map file (/etc/auto.mysambaserver-cifs) and the key file which contains the credentials for authentication.

In the following we will configure autofs to mount shares to /home/<myusername>/mnt/cifs/<mysambaserver.local>/<share-name>.

Map file

Create the mapfile

<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-linenumbers="false">vim /etc/auto.<mysambaserver>-cifs
<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-theme="enlighter">#!/bin/bash
# $Id$
# This file must be executable to work! chmod 755!
set -x
KEY="${1}"
# Note: create a cred file for each windows/Samba-Server in your network
#       which requires password authentification.  The file should contain
#       exactly two lines:
#          username=user
#          password=*****
#       Please don't use blank spaces to separate the equal sign from the
#       user account name or password.
CREDFILE="/etc/autofs/keys/${KEY}"
# # !!!!!!!!!!!!!!!!! PAY ATTENTION TO the CIFS VERSION in MOUNTOPTS !!!!!!!!!!!!!!!!!!!!!!!!!!!
# https://www.raspberrypi.org/forums/viewtopic.php?t=201727 # https://www.raspberrypi.org/forums/viewtopic.php?t=211987
# http://krisko210.blogspot.com/2016/06/autofs-automount-nfs-share.html
# Note: Use cifs instead of smbfs:
MOUNTOPTS="-fstype=cifs,file_mode=0644,dir_mode=0755,nounix,uid=1000,gid=1000"
SMBCLIENTOPTS=""
for EACH in /bin /sbin /usr/bin /usr/sbin
do
        if [ -x $EACH/smbclient ]
        then
                SMBCLIENT=$EACH/smbclient
                break
        fi
done
[ -x $SMBCLIENT ] || exit 1
if [ -e "${CREDFILE}" ]
then
        MOUNTOPTS=$MOUNTOPTS",credentials=${CREDFILE}"
        SMBCLIENTOPTS="-A "$CREDFILE
else
        SMBCLIENTOPTS="-N"
fi
$SMBCLIENT $SMBCLIENTOPTS -gL "${KEY}" 2>/dev/null \
   | awk -v key="$KEY" -v opts="${MOUNTOPTS}" -F'|' -- '
        BEGIN   { ORS=""; first=1 }
        /Disk/  { if (first) { print opts; first=0 };
                  gsub(/ /, "\\ ", $2);
                  sub(/\$/, "\\$", $2);
                  print " \\\n\t /" $2, "://" key "/" $2 }
        END     { if (!first) print "\n"; else exit 1 }
        '
<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-linenumbers="false">chmod 755 /etc/auto.<mysambaserver>-cifs

This file is a slightly modified version of the file auto.smb which usually comes as part of the autofs package. You need to modify the line defining the mountopts above and change userid and groupid to the uid/gid of your personal account.

key file

Now you have to give autofs the credentials needed to access the shares on your network. To do this, create a key file:

<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-linenumbers="false">mkdir -pv /etc/autofs/keys/
vim /etc/autofs/keys/<mysambaserver.local>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="enlighter">username=<myusername>
password=<myPassWord>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">chown root:root /etc/autofs/keys/<mysambaserver.local>
chmod 600 /etc/autofs/keys/<mysambaserver.local>

Master-Map

The maps to be monitored are specified in this file.

Execute the following command to append the line “/home/myusername/mnt/cifs /etc/auto.mysambaserver-cifs --timeout=60 --ghost” to the end of the /etc/auto.master file:

<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-linenumbers="false">echo "/home/<myusername>/mnt/cifs /etc/auto.<mysamabaserver>-cifs --timeout=60 --ghost" >>/etc/auto.master

The syntax here is: <Directory> <Map-File> [Parameter]

The shares should be unmounted after an inactivity of 60 seconds (--timeout=60), and empty directories should be created for the individual shares before mounting (--ghost).

Debugging

For debugging output, stop the daemon and start automount interactively with verbose output enabled:

<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-linenumbers="false">service autofs stop
automount -f -v

You can restart autofs with

<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-linenumbers="false">service autofs start

Test

Use the following command to test if your setup is working

<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-linenumbers="false">ls -als /home/<myusername>/mnt/cifs/<mysambaserver.fqdn>/<share-name>

#autofs #cifs #filesystem #howto #linux #mount #mounting #samba #smb #ubuntu

Originally posted at: https://www.nanoscopic.de/2020/09/automount-cifs-share-with-autofs/

sigsleep@nerdpol.ch

Hi, here is an initial proposal to build a tahoe-lafs based distributed filesystem.
I could only find the test grid for testing purposes, some commercial services, and a kind of closed/abandoned small community-driven network.
So I would find it interesting to build a community-based tahoe-lafs grid.

If there are people also interested in participating to start deploying/running a secure distributed filesystem, please have a look at my quick draft here, and we'll figure out how to start.
A couple of people from different ASes would be helpful, to ensure distribution over different networks.

This is an initial call; any ideas are welcome.
The goal would be to provide such a running filesystem for the free use of any participating people, instead of sticking to a commercial offering or being forced to do isolated self-hosting for this type of data.

CAVEAT: if I overlooked an existing project, please correct me and we can join that project instead.
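
For illustration, joining such a grid as a participant would look roughly like this; a minimal sketch, assuming the grid operator publishes an introducer fURL (paths and the fURL are placeholders):

pip install tahoe-lafs              # or install the distro package
tahoe create-client ~/.tahoe        # create a client node directory
# edit ~/.tahoe/tahoe.cfg and set, in the [client] section:
#   introducer.furl = <fURL published by the grid operator>
tahoe run ~/.tahoe                  # start the node (foreground)
tahoe put myfile.txt                # in another terminal: upload a file; prints the capability URI needed to retrieve it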

https://www.mbuf.net/tahoe-lafs/
#tahoe-lafs #p2p #distributed #filesystem #cloud #security #encryption