As my volume was full and I wanted to keep my several-years-old snapshots, I needed to solve the issue differently. Disappointingly, I did not find any usable solution, so I made one myself:
May it be of great use for other btrfs folks out there.
#linux #admin #filesystem #server #floss #gpl #freeculture #ilovefs
source: https://www.wiz.io/blog/ubuntu-overlayfs-vulnerability
CVE-2023-2640 and CVE-2023-32629 were found in the #OverlayFS module in Ubuntu, which is a widely used Linux #filesystem that became highly popular with the rise of containers, as its features enable the deployment of dynamic filesystems based on pre-built images. OverlayFS serves as an attractive attack surface, as it has a history of numerous logical vulnerabilities that were easy to exploit. This makes the newly discovered vulnerabilities especially risky, given that the exploits for past OverlayFS vulnerabilities work out of the box without any changes.
#security #os #software #update #bug #problem #news #exploit #hack #hacker #server #vulnerability
According to the lead author, calling it Butter-FS originated from CoW, which stands for copy-on-write; many call it Better-FS, but it actually stands for B-tree file system. It has been around since 2007, so it is relatively new compared to other file systems, but the feeling now seems to be that it is finally quite stable, the exception being new features still under development. I'm still a novice at BTRFS, but these are some of my reasons for wanting to fully move across to it.
Potential downsides (yes, it is no magic bullet): it can be slightly slower due to compression and validation, and it is reportedly not well suited for large databases. The noatime mount option can be used to disable the Linux 'access time' write that otherwise happens on every file read. It will also use a bit more space, as it copies updated data to new sectors instead of overwriting data in place like ext4 and others do. Although there is RAID functionality, BTRFS is not actually doing backups (it mirrors); you still need to back up off-site and to other media. There is no built-in filesystem encryption (it is planned, though), but you can use other standards, although these could potentially affect some advantages of BTRFS, e.g. using raw block devices.
But the advantages may well outweigh the disadvantages (a short command sketch follows the list):
1. Copy-on-Write: Any existing data being edited or updated is left untouched on the drive, which means potentially less loss and a far easier, quicker way to reliably roll back to a previous state.
2. Snapshots: Manual or automated, they are extremely quick as they do not recopy all previously existing data; a snapshot is essentially a copy of the metadata describing the state of the files. Where this is done for the boot drive, such snapshots can be configured to appear automatically in the GRUB boot menu, for quickly reverting to a previous version. No manual booting from a LiveCD, chrooting, or Clonezilla restores needed. SUSE produced an excellent Snapper app for managing snapshots, but Timeshift also supports them.
3. Software RAID: What stands out is that drives need not be matched sizes at all. You can also add to a running system and just rebalance BTRFS. BTRFS rebuilds involve only the blocks actively used by the file system, so rebuilds are much quicker than on most other systems.
4. Self-healing: Checksums for data and metadata, automatic detection of silent data corruptions. Checksums are verified each time a data block is read from disk.
5. Three different compression options: ZLIB, LZO, and ZSTD differ in terms of speed and amount of compression. You can compress only new files, process the whole file system, or just compress specific individual files if you wish. Compression is supported on a per-mount basis. If compression would make a file any bigger than the original, the Btrfs filesystem will, by default, not compress that file.
6. Utilities: Scrub for validating checksums, defragmenting while subvolumes are mounted. Check (unmounted drives) is similar to fsck. Balancing for adding new drives to a RAID or other changes made to BTRFS.
7. Send/Receive of subvolume changes: Very efficient way of mirroring to a remote system via various options over a LAN or the Internet.
8. Disk Partitioning: In theory you could use no partitions at all, but it is recommended you create at least one (GRUB prefers it). The rest of the drive, though, can be BTRFS subvolumes that you can resize on the fly without unmounting or using a LiveCD.
9. Very large VM files: You can add them to separate subvolumes you create and have them act as independent files without copying (remember, you are still backing up, aren't you?).
10. Conversion from other filesystems (ext2, ext3, ext4, reiserfs) to btrfs: Copy-on-write algorithms allow BTRFS to preserve an unmodified copy of the original FS and allow the administrator to undo the conversion, even after making changes in the resulting BTRFS filesystem.
11. The Linux kernel includes BTRFS support, so there is no need to install drivers, just the utility apps to manage it.
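Here is a minimal sketch of what some of the points above look like on the command line (run as root; device names, mount points and snapshot paths are placeholders, so try this on non-critical data first):
# mount with transparent zstd compression and without access-time updates
mount -o compress=zstd,noatime /dev/sdX1 /mnt/data
# take a read-only snapshot of a subvolume; near-instant, as only metadata is referenced
btrfs subvolume snapshot -r /home /home/.snapshots/home-$(date +%F)
# verify all data and metadata checksums on a mounted filesystem
btrfs scrub start /mnt/data
# convert an unmounted ext4 filesystem in place; the original is kept for rollback
btrfs-convert /dev/sdX2
# undo the conversion (only possible while the ext2_saved subvolume still exists)
btrfs-convert -r /dev/sdX2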
So converting my /home partition was quite easy. I have not yet quite decided how to do my boot drive, as I need to think about what I want to do with subvolume creation and what best practices to consider with GRUB etc. A Clonezilla copy of my boot drive means I can experiment and quickly restore without worries, though.
/usr and other hierarchies on the host cannot be accessed from Flatpak, because they conflict with the sandbox. Instead, you are presented with a fake, overlapped filesystem hierarchy. Currently, Flatpak alone has no working options to solve this issue, as configuration overrides have no effect on those special filesystem hierarchies. As Linux does not support directory hard links, this is a serious nuisance!
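For reference, this is the kind of override meant above; the syntax is standard Flatpak (the app ID org.mozilla.firefox is just an example), but as described in the paragraph above it has no effect on /usr and the other reserved hierarchies:
flatpak override --user --filesystem=/usr/share/doc:ro org.mozilla.firefox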
Luckily, there is a workaround to safely access the original directory without having root access, if your sysadmin (or you, if you own the system) installed the bindfs package. The bindfs command uses a FUSE filesystem to mirror the contents of a directory to another directory:
bindfs /overlapped ~/.overlapped
If high performance is needed:
bindfs -o multithreaded /overlapped ~/.overlapped
If security (read-only access) is needed:
bindfs -o ro /overlapped ~/.overlapped
TASK: access the documentation on a Debian system from a Flatpak app.
user@localhost:~$ mkdir .doc
user@localhost:~$ bindfs -o ro,multithreaded /usr/share/doc .doc
user@localhost:~$ ls .doc
…will grant you fast, read-only access to /usr/share/doc by visiting .doc in your user home. ls .doc will list the contents of /usr/share/doc, while .doc is not a symlink but a simple directory created by you.
You can now e.g. use the Flatpak version of Mozilla Firefox to browse file:///home/yourusername/.doc and it will let you read the files in /usr/share/doc, which are normally inaccessible under Flatpak.
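Not part of the original tip, but if you want the mirror to come back after a reboot, an /etc/fstab entry along these lines should work, assuming your bindfs and mount versions support the fuse.bindfs helper (adjust the username):
/usr/share/doc  /home/yourusername/.doc  fuse.bindfs  ro  0  0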
Note: this is not an official workaround; I found it by accident. If you know better alternatives, please feel free to comment so other users can benefit. Thank you.
Tags: #linux #gnulinux #debian #flatpak #sandbox #virtualization #security #hacking #filesystem #fs #docs #sysadmin #sys #documentation
The Bcachefs file-system that was born out of the Linux kernel’s block cache code has over the past few years matured greatly. Now in 2022 the core fundamentals of the file-system are “pretty close to done” and will hopefully be mainlined this calendar year into the Linux kernel.
Bcachefs has been in development since the mid-2010s and aims for speed while offering ZFS/Btrfs-like features. It has been under heavy feature development, over time picking up features such as Btrfs-like snapshots (also referred to as "bad@$$" snapshots), among other promising feature work to allow it to compete as a next-gen file-system.
See https://www.phoronix.com/scan.php?page=news_item&px=Bcachefs-2022-Hopes
#technology #linux #filesystem #bcachefs
“FOSS means that effort is shared across organizations and lowers maintenance costs significantly” (src: comment by JohnFOSS on itsfoss.com)
“The whole purpose behind ZFS was to provide a next-gen filesystem for UNIX and UNIX-like operating systems.” (src: comment by JohnK3 on itsfoss.com)
“The performance is good, the reliability and protection of data is unparalleled, and the flexibility is great, allowing you to configure pools and their caches as you see fit. The fact that it is independent of RAID hardware is another bonus, because you can rescue pools on any system, if a server goes down. No looking around for a compatible RAID controller or storage device.”
“after what they did to all of SUN’s open source projects after acquiring them. Oracle is best considered an evil corporation, and anti-open source.”
“it is sad – however – that licensing issues often get in the way of the best solutions being used” (src: comment by mattlach on itsfoss.com)
“Zfs is greatly needed in Linux by anyone having to deal with very large amounts of data. This need is growing larger and larger every year that passes.” (src: comment by Tman7 on itsfoss.com)
“I need ZFS, because In the country were I live, we have 2-12 power-fails/week. I had many music files (ext4) corrupted during the last 10 years.” (src: comment by Bert Nijhof on itsfoss.com)
“some functionalities in ZFS does not have parallels in other filesystems. It’s not only about performance but also stability and recovery flexibility that drives most to choose ZFS.” (src: comment by Rubens on itsfoss.com)
“Some BtrFS features outperform ZFS, to the point where I would not consider wasting my time installing ZFS on anything. I love what BtrFS is doing for me, and I won’t downgrade to ext4 or any other fs. So at this point BtrFS is the only fs for me.” (src: comment by Russell W Behne on itsfoss.com)
“Btrfs Storage Technology: The copy-on-write (COW) file system, natively supported by the Linux kernel, implements features such as snapshots, built-in RAID, and self-healing via checksumming for data and metadata. It allows taking subvolume snapshots and supports offline storage migration while keeping snapshots. For users of enterprise storage systems, Btrfs provides file system integrity after unexpected power loss, helps prevent bitrot, and is designed for high-capacity and high-performance storage servers.” (src: storagereview.com)
BTRFS is GPL 2.0 licensed (as part of the Linux kernel) btw.
bachelor projects are written about btrfs vs zfs (2015)
ext4 is good for notebooks & desktops & workstations (that do regular backups on a separate, external, then disconnected medium)
is zfs “better” on/for servers? (this user says: even on single disk systems, zfs is “better” as it prevents bit-rot-file-corruption)
with server-hardware one means:
zfs has awesome features such as:
many more features:
(src)
look at WRITTEN, not at USED.
https://papers.freebsd.org/2019/bsdcan/ahrens-how_zfs_snapshots_really_work/
so on a single-drive system, performance-wise ext4 is what the user wants.
on multi-drive systems, the opposite might be true, zfs outperforming ext4.
“is not necessary nor recommended to partition the drives before creating the zfs filesystem” (src, src of src)
http://perftuner.blogspot.com/2017/02/zfs-zettabyte-file-system.html
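A minimal sketch of that, assuming two spare whole disks (pool name and by-id paths are placeholders):
# mirrored pool created directly on whole disks, no manual partitioning
zpool create tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B
zpool status tank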
zfs has no raid10 mode by that name (striping over mirror vdevs gives the equivalent); its raid5-style mode is raidz, which means at least one disk's worth of capacity goes to parity
more RAM is better than a dedicated SSD/NVMe cache, BUT zfs can do both! which is remarkable.
(the optimum probably being RAM + SSD/NVMe caching)
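Roughly like this, assuming a pool called tank and a spare NVMe device (both placeholders): RAM serves as the ARC automatically, and an SSD/NVMe can be added on top as a second-level read cache (L2ARC):
# add an NVMe device as L2ARC read cache
zpool add tank cache /dev/disk/by-id/nvme-FAST_DISK
# optionally cap the in-RAM ARC, e.g. at 8 GiB (Linux module parameter, value in bytes)
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max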
“our ZFS support with ZSys is still experimental.”
https://ubuntu.com/blog/zfs-focus-on-ubuntu-20-04-lts-whats-new
Linus: “And honestly, there is no way I can merge any of the ZFS efforts until I get an official letter from Oracle that is signed by their main legal counsel or preferably by Larry Ellison himself that says that yes, it’s ok to do so and treat the end result as GPL’d.” (itsfoss.com)
comment by vagrantprodigy: “Another sad example of Linus letting very limited exposure to something (and very out of date, and frankly, incorrect information about it’s licensing) impact the Linux world as a whole. There are no licensing issues, OPENZFS is maintained, and the performance and reliability is better than the alternatives.” (itsfoss.com)
it is open source, but not GPL-licensed: for Linus that's a no-go, and quite frankly, yes, it is a problem.
“this article missed the fact that CDDL was DESIGNED to be incompatible with the GPL” (comment by S O on itsfoss.com)
“There is always a thing called “in roads”, where it can also be called “bait”.
“The article says a lot in this respect.
“That Microsoft founder Bill Gate comment a long time ago was that “nothing should be for free.”
That too rings out loud, especially in today’s American/European/World of “corporate business practices” where they want what they consider to be their share of things created by others.
Just to be able to take, with not doing any of the real work.
That the basis of the GNU Gnu Pub. License (GPL) 2.0 basically says here it is, free, and the Com. Dev. & Dist.
License (CDDL) 1.0 says use it for free, find our bugs, but we still have options on its use, later on downstream.
..
And nothing really is for free, when it is offered by some businesses, but initial free use is one way to find all the bugs, and then begin charging costs.
And it it has been incorporated into a linux distribution, then the linux distribution could later come to a legal halt, a legal gotcha in a court of law.
In this respect, the article is a good caution to bear in mind, that the differences in licensing can have consequences, later in time.Good article to encourage linux users to also bear in mind, that using any programs that are not GNU Gen. Pub. License (GPL) 2.0 can later on have consequences for use having affect on a lot of people, big time.
That businesses (corportions have long life spans) want to dominate markets with their products, and competition is not wanted.
So, how do you eliminate or hinder the competition?
… Keep Linux free as well as free from legal downstream entanglements.”
(comment by Bruce Lockert on itsfoss.com)
Imagine this: just as with Java, Oracle might decide to change the licence on any day Oracle sees fit, to "cash in" on the ZFS users and demand purchasing a licence… #wtf Oracle
Guess one is not alone with that thinking: “Linus has nailed the coffin of ZFS! It adds no value to open source and freedom. It rather restricts it. It is a waste of effort. Another attack at open source. Very clever disguised under an obscure license to trap the ordinary user in a payed environment in the future.” (comment by Tuxedo on itsfoss.com)
GNU Linux Debian warns during installation:
“Licenses of OpenZFS and Linux are incompatible”
“You cannot change the license when forking (only the copyright owners can), and with the same license the legal concerns remain the same. So forking is not a solution.” (comment by MestreLion on itsfoss.com)
“This effort is fast-forwarding delivery of advances like dataset encryption, major performance improvements, and compatibility with Linux ZFS pools.” (src: truenas.com)
tricky.
Of course users can say "haha", "accidentally deleted millions of files", "no backups", "now snapshots would be great",
or come up with a smart file system that can do snapshots.
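In ZFS terms that looks roughly like this (dataset name is a placeholder):
# take a snapshot before doing something risky
zfs snapshot tank/home@before-cleanup
# list existing snapshots
zfs list -t snapshot
# roll the whole dataset back after the "accidentally deleted millions of files" moment
zfs rollback tank/home@before-cleanup
# or copy single files back from the read-only snapshot directory
ls /tank/home/.zfs/snapshot/before-cleanup/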
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/index.html
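Condensed from that page (assuming the contrib component is enabled in your apt sources; package names as documented there):
apt update
apt install linux-headers-amd64 zfs-dkms zfsutils-linux
modprobe zfs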
with ext4 it was recommended to put the GNU Linux / (root) and swap on a dedicated SSD/NVMe (that then regularly backs up to the larger raid10),
but then the user would miss out on ZFS's awesome snapshot-restore features, which would mean:
https://openzfs.org/wiki/Main_Page
#linux #gnu #gnulinux #opensource #administration #sysops #zfs #openzfs #filesystem #filesystems #ubuntu #btrfs #ext4 #gnu-linux #oracle #licence
Originally posted at: https://dwaves.de/2022/01/20/the-most-controversial-filesytem-in-the-known-universe-zfs-so-ext4-is-faster-on-single-disk-systems-btrfs-with-snapshots-but-without-the-zfs-licensing-problems/
so ext4 is good for notebooks & desktops & workstations, zfs is better on servers?
so, ext4 is good for notebooks & desktops & workstations (that do regular backups on a separate, external, then disconnected medium) is zfs "better" on/for servers? (this user says: even on single disk systems, zfs is "better" as it prevents bit-rot-file-corruption) with[...]
#linux #gnu #gnulinux #opensource #administration #sysops #zfs #openzfs #filesystem #filesystems #ubuntu
Originally posted at: https://dwaves.de/2022/01/20/so-ext4-is-good-for-notebooks-desktops-workstations-zfs-is-better-on-servers/
Many, many years ago I once got very annoyed that I could not create a file on my computer that was supposed to be called con.con.
Back then I had written a program that creates and names various files depending on their content, etc. What annoyed me in particular was that there was no error: my program was told that it had successfully written the file, done. I just could not read it later, because it was not there.
I was very annoyed, very, and ranted in internet forums about what rubbish this was, etc. At some point somebody pointed me to a Microsoft document which indeed specified that you may not name a file con (and on Windows, con, Con, CON and CoN are all the same anyway).
Well, at least it was documented, BUT WHY DO I NOT GET AN ERROR MESSAGE, I complained.
Now I have stumbled over this video https://www.youtube.com/watch?v=bC6tngl0PTI which finally explains the matter to me properly.
And in case you don't feel like watching it (it is in a foreign language), here is the explanation in short:
* DOS had a concept similar to Linux's "everything is a file", meaning devices had a file representation (like /dev/tty2).
* CON was such a file representation for a console.
* File name extensions (the .con I used, or .txt and so on) are not really part of the file name (and are therefore treated independently of it).
* And because CON is such a file-device thingy, there is no error message either: it is perfectly correct that I can write "to the console".
The whole "incident" is now, uh, 15 years ago or so, and NOW, NOW FINALLY someone has explained it to me in a way that actually makes sense :D. Until today it was just a case of "doesn't work, because Windows" for me.
#Windows #Microsoft #Operatingsystem #Betriebssystem #Softwaredevelopment #Softwareentwicklung #Fehler #Error #con #problem #Erklärung #explanation #filesystem #Dateisystem #youtube #filedevice
automount CIFS share with autofs
This HowTo will prepare a Linux client to automatically mount CIFS shares from a remote Samba server on access/demand. Since I am mounting different filesystems, I have structured my mountpoints as follows:
/
├── home
│ ├── USER
│ │ ├── mnt
│ │ │ ├── cifs
│ │ │ │ ├── smb-server-a.fqdn
│ │ │ │ │ ├── share-a
│ │ │ │ │ ├── share-b
│ │ │ │ │ └── share-c
│ │ │ │ ├── smb-server-b.fqdn
│ │ │ │ │ ├── share-a
│ │ │ │ │ ├── share-b
│ │ │ │ │ └── share-c
│ │ │ ├── sshfs
│ │ │ │ ├── ssh-server-a.fqdn
From here on, I will use "mysambaserver.local" as the Samba server's FQDN, "mysambaserver" as its hostname, "myusername" as my username, "mygroup" as the group, and "myPassWord" as the password.
At time of writing, the server is running Ubuntu 18.04.4 LTS and the client is running Ubuntu 20.04.1 LTS.
This HowTo was compiled by trial and error and from these sources:
Install the required packages on the client (gigolo is just “nice to have”) and check if its kernel supports CIFS.
apt-get install autofs cifs-utils smbclient gigolo gvfs-backends gvfs-fuse fuse
ls -1 /lib/modules/$(uname -r)/kernel/fs | grep "cifs"
cifs
Check remote connection to the Samba server:
smbclient -N -L //<mysambaserver.local>/
Sharename Type Comment
--------- ---- -------
share-a Disk Share A
share-b Disk Share B
share-c Disk Share C
IPC$ IPC IPC Service (mysambaserver server (Samba, Ubuntu))
SMB1 disabled -- no workgroup available
Check an authenticated remote login. If the command line asks for a password, enter the SMB password which is configured for the user at the Samba server (via smbpasswd).
smbclient -U <myusername> -L //<mysambaserver.local>/
Enter WORKGROUP\myusername's password: [myPassWord] <--- enter the password
Sharename Type Comment
--------- ---- -------
share-a Disk Share A
share-b Disk Share B
share-c Disk Share C
IPC$ IPC IPC Service (mysambaserver server (Samba, Ubuntu))
SMB1 disabled -- no workgroup available
Create the mount point in the user's home directory:
mkdir -pv /home/<myusername>/mnt/cifs
chown -R <myusername>:<mygroup> /home/<myusername>/mnt/
The configuration consists of the master map file (/etc/auto.master), the corresponding map file (/etc/auto.mysambaserver-cifs) and the key file which contains the credentials for authentication.
In the following we will configure autofs to mount shares to /home/<myusername>/mnt/cifs/<mysambaserver.local>/<share-name>.
Create the map file:
vim /etc/auto.<mysambaserver>-cifs
#!/bin/bash
# $Id$
# This file must be executable to work! chmod 755!
set -x
KEY="${1}"
# Note: create a cred file for each windows/Samba-Server in your network
# which requires password authentification. The file should contain
# exactly two lines:
# username=user
# password=*****
# Please don't use blank spaces to separate the equal sign from the
# user account name or password.
CREDFILE="/etc/autofs/keys/${KEY}"
# # !!!!!!!!!!!!!!!!! PAY ATTENTION TO the CIFS VERSION in MOUNTOPTS !!!!!!!!!!!!!!!!!!!!!!!!!!!
# https://www.raspberrypi.org/forums/viewtopic.php?t=201727 # https://www.raspberrypi.org/forums/viewtopic.php?t=211987
# http://krisko210.blogspot.com/2016/06/autofs-automount-nfs-share.html
# Note: Use cifs instead of smbfs:
MOUNTOPTS="-fstype=cifs,file_mode=0644,dir_mode=0755,nounix,uid=1000,gid=1000"
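# If your server requires a specific SMB protocol version (see the warning above),
# append it to the options here, e.g.: MOUNTOPTS="${MOUNTOPTS},vers=3.0"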
SMBCLIENTOPTS=""
for EACH in /bin /sbin /usr/bin /usr/sbin
do
if [ -x $EACH/smbclient ]
then
SMBCLIENT=$EACH/smbclient
break
fi
done
[ -x "$SMBCLIENT" ] || exit 1  # quoted, so the check really fails when no smbclient binary was found
if [ -e "${CREDFILE}" ]
then
MOUNTOPTS=$MOUNTOPTS",credentials=${CREDFILE}"
SMBCLIENTOPTS="-A "$CREDFILE
else
SMBCLIENTOPTS="-N"
fi
$SMBCLIENT $SMBCLIENTOPTS -gL "${KEY}" 2>/dev/null \
| awk -v key="$KEY" -v opts="${MOUNTOPTS}" -F'|' -- '
BEGIN { ORS=""; first=1 }
/Disk/ { if (first) { print opts; first=0 };
gsub(/ /, "\\ ", $2);
sub(/\$/, "\\$", $2);
print " \\\n\t /" $2, "://" key "/" $2 }
END { if (!first) print "\n"; else exit 1 }
'
chmod 755 /etc/auto.<mysambaserver>-cifs
This file is a slightly modified version of the file auto.smb which usually comes as part of the autofs package. You need to modify the line defining the mountopts above and change userid and groupid to the uid/gid of your personal account.
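To look up those values for your account (username is the usual placeholder):
id -u <myusername>   # numeric uid, e.g. 1000
id -g <myusername>   # numeric gid, e.g. 1000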
Now you have to give autofs the credentials needed to access shares on your network. To do this, create a key file:
mkdir -pv /etc/autofs/keys/
vim /etc/autofs/keys/<mysambaserver.local>
username=<myusername>
password=<myPassWord>
chown root:root /etc/autofs/keys/<mysambaserver.local>
chmod 600 /etc/autofs/keys/<mysambaserver.local>
The maps to be monitored are specified in the master map file /etc/auto.master.
Execute the following command to append the line "/home/<myusername>/mnt/cifs /etc/auto.<mysambaserver>-cifs --timeout=60 --ghost" to the end of the /etc/auto.master file:
echo "/home/<myusername>/mnt/cifs /etc/auto.<mysambaserver>-cifs --timeout=60 --ghost" >>/etc/auto.master
The syntax here is: <Directory> <Map-File> [Parameter]
The shares should be unmounted after 60 seconds of inactivity (--timeout=60), and empty directories should be created for the individual shares before mounting (--ghost).
For debugging output, stop the daemon and start autofs interactively with verbose output enabled:
service autofs stop
automount -f -v
You can restart autofs with:
service autofs start
Use the following command to test whether your setup is working:
ls -als /home/<myusername>/mnt/cifs/<mysambaserver.fqdn>/<share-name>
#autofs #cifs #filesystem #howto #linux #mount #mounting #samba #smb #ubuntu
Originally posted at: https://www.nanoscopic.de/2020/09/automount-cifs-share-with-autofs/
Hi, here is an initial proposal to build a tahoe-lafs based distributed filesystem.
I could only find the test grid for testing purposes, some commercial services, and a kind of closed/abandoned small community-driven network.
So I would find it interesting to build a community-based tahoe-lafs grid.
If there are people interested in participating, to start deploying/running a secure distributed filesystem, please have a look at my quick draft here, and we'll figure out how to start.
A couple of people from different ASes would be helpful, to ensure distribution over different networks.
This is an initial call; any ideas are welcome.
The goal would be to provide such a running filesystem for the free use of all participating people, instead of sticking to a commercial one or being forced into isolated self-hosting for this type of data.
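For anyone who has not touched tahoe-lafs yet, joining such a grid as a client would look roughly like this; the introducer fURL is a placeholder the grid would have to publish, and the commands are a sketch based on the standard tahoe CLI (offering storage to the grid needs some additional node configuration):
pip install tahoe-lafs
# create a client node pointed at the (hypothetical) community introducer
tahoe create-client --nickname=mynode --introducer=pb://EXAMPLE@tcp:grid.example.net:12345/introducer ~/.tahoe-mynode
tahoe run ~/.tahoe-mynode
# store a test file in the grid
tahoe -d ~/.tahoe-mynode put testfile.txt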
CAVEAT: if I have overlooked an existing project, please correct me and we can join that project instead.
https://www.mbuf.net/tahoe-lafs/
#tahoe-lafs #p2p #distributed #filesystem #cloud #security #encryption