dtgreene: These days, I'd probably recommend exFAT if you need such a partition. (The former recommendation was fat32, but that doesn't support large files or partitions.)
If it isn't for large filenames, yeah. Otherwise, depending on the size of the partition, FAT16 or FAT32 are still contenders (FAT12... yeah, that iteration is fairly useless; almost better to use mkisofs or squashfs). On a ramdrive I've seen huge speed gains using FAT32, simply because it's a simpler filesystem and it probably doesn't have to do any permission checks.
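For anyone wanting to try that, the whole thing is only a few commands (rough sketch; the brd module, the size and the mount point are just example choices):

    # load the ramdisk block driver; rd_size is in KiB, so this is ~512 MiB
    sudo modprobe brd rd_nr=1 rd_size=524288
    # put FAT32 on /dev/ram0 (needs dosfstools) and mount it
    sudo mkfs.vfat -F 32 /dev/ram0
    sudo mkdir -p /mnt/ramfat
    sudo mount /dev/ram0 /mnt/ramfat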

On Linux? Hmmm... depends on the use case. I've got some premade HUGE ext2 ramdisks that take up like 80 KB while unmounted and dormant, though they rely on zram in the instances where I'm extracting heavily redundant data for momentary processing. And the default tmpfs usually suffices.
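The zram variant is roughly this, if anyone is curious (sketch only; the size and mount point are placeholders, and zramctl comes with util-linux):

    # create a compressed RAM-backed block device (~1 GiB of uncompressed capacity)
    dev=$(sudo zramctl --find --size 1G)
    # format it like any other block device and mount it
    sudo mkfs.ext2 "$dev"
    sudo mkdir -p /mnt/zramdisk
    sudo mount "$dev" /mnt/zramdisk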

The main FS is ext4, or whatever is current; for shared storage I might Samba it; for thumbdrives, yeah, same as you: FAT32 for small/medium drives (up to 16 GB) and exFAT on everything else. FAT16 on up to 2 GB and FAT12 on up to 256 MB (like you'll see those around anymore...).

Reminds me: a while back I had this 240 GB external drive that was formatted FAT32, and during a backup on one computer I couldn't transfer a bunch of files that were larger than the limit (I think they were 5 GB each). So I created enough 3-4 GB files and mounted them using ZFS/RAID, and then I could copy files between the two Linux computers. A workaround for when you can't just flush/format the drive. That, or 7-Zip it and split into multiple files.
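(The 7-Zip route is basically a one-liner; the paths here are made up:)

    # split into 3 GiB volumes so each piece stays under FAT32's 4 GiB file limit
    7z a -v3g /mnt/fat32drive/backup.7z /home/me/bigfiles/
    # extracting from the first volume later stitches everything back together
    7z x /mnt/fat32drive/backup.7z.001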
I'm on dual boot Linux Mint/Windows 11.

Why Mint: it simply has Heroic installed; depending on the game I need to run, I use Heroic Launcher or Lutris. GOG games run fine like that. And Mint uses Flatpak and official repositories. Apart from PipeWire and WirePlumber, which were a nightmare to configure, my system runs fine without problems... on Linux (on Windows, hum... anyway).
Time spent configuring things for gaming: approx. 2 hours. (Why I've installed Windows: simply because games often run better on Windows than through Proton/Wine (fewer resources used).)
For sharing data between the OSes? Linux has access to the Windows partitions; Windows does not have access the other way, for security reasons.
EverNightX: Because that's how rolling release distros (Arch being a popular example) work.
You seem to be used to using point releases like Debian/Ubuntu/Mint/Fedora.
Generally, I'd say: choose a distro with a schedule that fits your intended use. The quicker the release cycle, the shorter the supported lifetime.
- Rolling release for daily use.
- Point releases + interim releases for regular use.
- Long-term support releases (LTS) for sporadic use and/or focus on system stability.

timppu: Is there a way to update to some interim level first, and then to the latest versions, in order to tackle the problem you described, i.e. the delta being too big because you haven't updated for ages, so the update to the latest version fails?
I like the idea of rolling updates because generally I dislike the end-of-life (EOL) of point releases (...).
But the problem is what you described: sometimes I may have an old Linux installation on some old laptop I haven't fired up for many many months, and certainly I'd like to be able to run the updates on it.
Anyway, anyone can reply who knows the answer.
In the one instance I mentioned, out of curiosity, I installed a rolling-release Manjaro into a VM. But it ended up as a rarely used tool, only fired up to perform a specific task, so I never bothered much with updates. That was 100% my mistake. In the end, the installation was stuck in a death loop of an unsupported kernel, outdated package sources and an incomplete keyring to validate new package signatures. (Someone more experienced with Arch could surely have salvaged the situation. But I didn't care that much and just nuked the VM.)
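(For the record, the usual first aid I've since read about for an Arch-based install that slept too long is apparently along these lines; I never tested it on that VM, and Manjaro layers its own keyring package on top:)

    # refresh the package databases and the signing keys first, then do the full upgrade
    sudo pacman -Sy archlinux-keyring   # on Manjaro: also manjaro-keyring
    sudo pacman -Su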

Ubuntu:
I found out that Ubuntu moves old releases / repositories into an archive. You can still access them but have to edit your system's sources list manually.
http://archive.ubuntu.com/ubuntu/dists/
But consider staying on LTS releases if you have difficulties with the quicker release cycle of interim releases.
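For illustration, the entries in /etc/apt/sources.list just need to point at the archived location; the release codename below is only an example:

    deb http://archive.ubuntu.com/ubuntu/ jammy main restricted universe multiverse
    deb http://archive.ubuntu.com/ubuntu/ jammy-updates main restricted universe multiverse
    # releases that have gone fully end-of-life live under old-releases.ubuntu.com instead:
    # deb http://old-releases.ubuntu.com/ubuntu/ <codename> main restricted universe multiverse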

EverNightX: I do not believe (at least in the case of Arch) you can request an update to a particular point in time.
(...)
Not an Arch expert. But I found this:
https://wiki.archlinux.org/title/Arch_Linux_Archive
quoting:

The Arch Linux Archive (a.k.a ALA), formerly known as Arch Linux Rollback Machine (a.k.a ARM), stores official repositories snapshots, iso images and bootstrap tarballs across time.
You can use it to:
- Downgrade to a previous version of one package (last version is broken, I want the previous one)
- Restore all your packages at a precise moment (my system is broken, I want to go back 2 months ago)
- Find a previous version of an ISO image
Packages are only kept for a few years, afterwards they are moved to the Arch Linux Historical Archive on archive.org.
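Per that page, rolling a system back to a snapshot date boils down to pointing the mirrorlist at the archive and syncing (the date is just an example):

    # /etc/pacman.d/mirrorlist
    Server = https://archive.archlinux.org/repos/2024/03/01/$repo/os/$arch

    # then downgrade/sync everything to that snapshot
    sudo pacman -Syyuu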
Post edited March 28, 2024 by g2222
rtcvb32: On linux. Hmmm... Depends on the usecase. I've got some premade HUGE ext2 ramdisks that take up like 80k while unmounted and dormant, though they rely on zram in the instances i am extracting heavily redundant data for momentary processing. And the default tmpfs usually suffices.
tmpfs has some really nice advantages, to the point where I wouldn't recommend using any other filesystem in RAM unless it needs to have the exact layout of an on-disk filesystem or you actually need some feature tmpfs does not provide.

Thing is, for most (physical) filesystems, any access to the filesystem will go through the cache before it hits the disk. So, if you write to an in-memory ext2 image, the write will first go to the cache and then to the actual image; conversely, reading will check the cache, and if it's not there, will need to read from the image (which means cached data ends up in memory twice). For tmpfs, on the other hand, that's not necessary; the reads and writes will access the cache, but, unless the page is swapped out to disk, that's as far as it needs to go. (For example, if a file is written to tmpfs, and the OS decides it never needs to swap that page to disk, then the file will *never* be written anywhere other than the cache.)

Incidentally, there's also ramfs, which is even simpler; the filesystem cache is all there is, and it never gets evicted from memory. Its use isn't recommended in practice because it's easy to run out of physical memory. (It *might* be useful, however, if you have data that must never be written to disk, but I don't know how secure that is.)
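(Both are literally one mount away if you want to compare them; the mount points are arbitrary:)

    # tmpfs: size-limited, and pages can be swapped out under memory pressure
    sudo mkdir -p /mnt/tmp-ram /mnt/ram-test
    sudo mount -t tmpfs -o size=512M tmpfs /mnt/tmp-ram

    # ramfs: no size limit is enforced and nothing is ever swapped,
    # so it can eat all physical memory if you let it
    sudo mount -t ramfs ramfs /mnt/ram-test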
timppu: That is a lot of extra steps for what should be a simple process.
And this is why my display manager is TBSM, so I don't have to fight a bloated graphical program just to select a different Desktop or DE.

I just invoke TBSM, type "18", and off to WindowMaker we go. It's mostly pure Bash, so there's very little to go wrong; and it can invoke most X and Wayland sessions, as long as they've got a line in a text file somewhere.
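(The "line in a text file" is just a desktop entry; something like this under /usr/share/xsessions/ works, with Wayland sessions going in /usr/share/wayland-sessions/ instead. The names here are only illustrative:)

    [Desktop Entry]
    Name=WindowMaker
    Comment=Lightweight NeXTSTEP-style window manager
    Exec=wmaker
    Type=Application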
rtcvb32: (Fat12... yeah that iteration is fairly useless, almost better to use mkisofs or squashfs)
But can you fit either of those filesystems on a floppy disk? What if it's necessary to write to that floppy?

rtcvb32: So i created enough 3-4Gb files and mounted it using zfs/RAID. I could copy files between the two linux computers. A workaround when you can't just flush/format the drive. That or 7zip and split into multiple files.
You don't need zfs for that.

In fact, you could just use lvm, or even use the device mapper directly. Alternatively, mdraid could work. Much simpler than zfs, doesn't require tainting the kernel with a GPL2-incompatible module, and doesn't waste resources for features that are entirely useless in this context.

You could also just use the split utility to split the files, then use cat on the destination computer to reconstruct the file.

One other trick I've read about: If there is nothing you care about on the drive, you could just tar (and compress, if necessary) the files and write out the archive directly to the device node (that is, use something like /dev/sdX as the file). Do note that this will delete everything on that drive, and that the resulting flash drive won't have a valid filesystem (so you'll need to reformat if you're going to do that manually). Also, make sure you're using the correct device node, and not, say, your computer's main hard drive. The advantage of this approach is that it avoids the filesystem overhead. Note that you'll need to get root access to do this; I don't recommend adding the "sudo" to the start of the command until you know you have it right.

And, of course, if the computers are on the same network, you can use netcat. (Also works over the Internet, though you may need to be careful of firewall rules, and the transmission is not encrypted; if you want encryption and security, use sftp.)
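Rough shapes of those, with made-up paths and /dev/sdX standing in for whatever the target really is (double-check the device node before running anything destructive):

    # split/cat: chop a big file into FAT32-sized pieces, reassemble later
    split -b 3G bigfile.img bigfile.img.part.
    cat bigfile.img.part.* > bigfile.img

    # tar straight to the raw device (destroys whatever filesystem was on it)
    tar -czf - /path/to/files | sudo dd of=/dev/sdX bs=4M
    # ...and read it back on the other machine
    sudo dd if=/dev/sdX bs=4M | tar -xzf -

    # netcat over the network (unencrypted; use sftp if that matters)
    nc -l 9000 > incoming.tar.gz                          # receiver (traditional netcat wants: nc -l -p 9000)
    tar -czf - /path/to/files | nc receiver-host 9000     # sender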
Post edited March 28, 2024 by dtgreene
rtcvb32: (Fat12... yeah that iteration is fairly useless, almost better to use mkisofs or squashfs)
dtgreene: But can you fit either of those filesystems on a floppy disk? What if it's necessary to write to that floppy?
???
Why are we talking about floppy disks now? :-P
rtcvb32: (Fat12... yeah that iteration is fairly useless, almost better to use mkisofs or squashfs)
dtgreene: But can you fit either of those filesystems on a floppy disk? What if it's necessary to write to that floppy?
Yes. Raw dd works just fine, from when I tried it. Likely, internally, when mounting, Linux does some conversion for the sector sizes (512 vs 2048), or it just sees the drive as a single file and accesses it accordingly; but yeah, it worked when I played with it.

That's what I love about Linux: a filesystem is just a format, and a storage medium is just a storage medium, and you can mix and match :)

But in the event it's fickle and doesn't work, you can always dd the contents back off and then mount that as a filesystem or extract the files. Saves you the overhead FAT12 would waste, plus you get file compression as a bonus.

Edit: Reminds me (I'm credited for it in the SquashFS notes): the filesystem just wouldn't work correctly on really small sector sizes (some bug that simply zeroed the data), but at 1024 or 2048 the problems went away. So setting it to match 512-byte sectors is unlikely to work.
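(The dance is basically this, assuming a standard /dev/fd0 and that the image stays under the 1.44 MB capacity; defaults for everything else:)

    # build a squashfs image from a directory
    mksquashfs ./files-for-floppy floppy.img -noappend
    # raw-write it to the floppy, then mount it read-only like any block device
    sudo dd if=floppy.img of=/dev/fd0 bs=512
    sudo mkdir -p /mnt/floppy
    sudo mount -t squashfs /dev/fd0 /mnt/floppy
    # to pull the data back off as an image instead: dd if=/dev/fd0 of=copy.img bs=512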

dtgreene: You don't need zfs for that.

In fact, you could just use lvm, or even use the device mapper directly.
Well, I was throwing together a solution on the fly and didn't know another way to do it. But since I knew ZFS was intended for multiple volumes, effectively doing RAID-0, I just did that. Plus ZFS adds protections: if you specify/mount the volumes in the wrong order, it sorts itself out.
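(What I did amounted to roughly this; the sizes, paths and pool name are from memory / made up:)

    # carve out a handful of sub-4 GiB container files on the FAT32 drive
    for i in 0 1 2 3; do
        dd if=/dev/zero of=/mnt/fat32ext/chunk$i.img bs=1M count=3072
    done
    # stripe them together into one pool (RAID-0 style, no redundancy)
    sudo zpool create -m /mnt/bigvol backuppool \
        /mnt/fat32ext/chunk0.img /mnt/fat32ext/chunk1.img \
        /mnt/fat32ext/chunk2.img /mnt/fat32ext/chunk3.img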

dtgreene: But can you fit either of those filesystems on a floppy disk? What if it's necessary to write to that floppy?
g2222: ???
Why are we talking about floppy disks now? :-P
FAT12 was a stripped-down version of FAT16 that was sufficient for floppies. At its absolute maximum, FAT12 could support drives just under 256 MB (assuming 64 KB allocation blocks, as going higher wouldn't have been supported on DOS machines due to the 32-64 KB limits without special drivers).
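(Back-of-the-envelope: 12-bit cluster numbers give at most about 2^12 = 4096 clusters, a handful of which are reserved, and 4096 x 64 KB per cluster is 256 MB, hence "just under".)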

Actually, I hope floppies come back, but 1.44 MB isn't big enough; it needs to be at least 100 MB, like a Zip disk. Though I think a lot of people would think it's not worth it... I suppose USB flash disks are the wave of the future. Which is kinda too bad.
Post edited March 28, 2024 by rtcvb32
rtcvb32: Actually i hope Floppies come back, but 1.44Mb isn't big enough, needs to be at least 100Mb like a ZipDisk. Though i think a lot of people would think it's not worth it... i suppose USB Flash disks are the wave of the future. Which is kinda too bad.
Wha??? In what way is a floppy disc not inferior? It's slower, more fragile, uses more power, and holds less.
USB Flash is not the wave of the future. It's been around a long time.
Post edited March 28, 2024 by EverNightX
rtcvb32: Actually i hope Floppies come back, but 1.44Mb isn't big enough, needs to be at least 100Mb like a ZipDisk. Though i think a lot of people would think it's not worth it... i suppose USB Flash disks are the wave of the future. Which is kinda too bad.
EverNightX: Wha??? In what way is a floppy disc not inferior? It's slower, more fragile, uses more power, and holds less.
USB Flash is not the wave of the future. It's been around a long time.
I agree they are inferior, and the last time I tried to put data on a disk, for some reason it had so many errors it was unreadable. (Bad drive? Or bad disks? Not sure.)

But there are many times I want to just transfer a handful of small files, and the hardware doesn't guarantee you'll have access to the USB drive, especially in Windows. So many times on a fresh install I've gotten stuck because the networking drivers weren't there and the USB drivers weren't working (the default drivers weren't recognizing the hardware), and I literally couldn't do anything except take the drive out, put it in another computer to transfer some files and HOPE it works, or boot some other OS from a thumbdrive and hope I copied the right drivers and can get them installed.

A floppy disk, believe it or not, always worked. The system always recognized it, it always read/wrote to it, and it was one button you pushed to eject the disk instantly, versus fiddling with the top or back of the case to find the USB port, especially when you can't see what you're doing and hope you aren't trying to plug the drive into the built-in HDMI port instead.

No, I'm not saying I want floppies to be the main method of transferring files, but dammit, it would make a lot of things far easier the 90% of the time I needed one.

Plus, disks were kinda disposable. Copy your homework onto a floppy, put your name on it, and drop it in the teacher's inbox. Poof! No waiting for it to copy and getting it back, no worrying about whether they might find the hidden folder with your Playboy cover downloads. And you can hang one on the wall almost as an art piece from the era that used them, while a thumb drive is so tiny it would be mistaken for a large fly today, not the iconic "save" icon used in most programs.

In many ways, a floppy is like wanting to have an 8-bit computer again: seeing what you could do with so little, and a little nostalgia.
dtgreene: But can you fit either of those filesystems on a floppy disk? What if it's necessary to write to that floppy?
g2222: ???
Why are we talking about floppy disks now? :-P
Because they mentioned fat12, which is primarily intended for floppy disks, so any filesystem that could replace fat12 in that niche would have to work on a floppy.

(Note that fat12 actually does date back to when floppy disks were commonly used.)
rtcvb32: Actually i hope Floppies come back, but 1.44Mb isn't big enough, needs to be at least 100Mb like a ZipDisk. Though i think a lot of people would think it's not worth it... i suppose USB Flash disks are the wave of the future. Which is kinda too bad.
I remember a time when floppy disks were basically obsolete, but USB flash drives weren't mainstream. CD-Rs were mainstream, and while they were good for saving data, they had the problem of not being re-writable (and CD-RWs needed specialized tools to rewrite, just like CD-Rs needed special tools; you couldn't just drag and drop a file). There were zip drives, jazz drives, and LS-120 superdisks that appeared as solutions, but none of them caught on.
Post edited March 28, 2024 by dtgreene
dtgreene: I remember a time when floppy disks were basically obsolete, but USB flash drives weren't mainstream. CD-Rs were mainstream, and while they were good for saving data, they had the problem of not being re-writable (and CD-RWs needed specialized tools to rewrite, just like CD-Rs needed special tools; you couldn't just drag and drop a file). There were zip drives, jazz drives, and LS-120 superdisks that appeared as solutions, but none of them caught on.
Each with issues, namely price (floppies: sometimes free, or $1 a disk; Zip disks: $20-30 a disk), plus the hardware (another $60-$100).

Burnable discs certainly are one solution, but when you're paying something like 50 cents a disc, you don't want to just drop a 20 MB file on it and not use the rest of the space. I'm aware they had a thing for a while with non-closed sessions: the disc worked as normal, and you could keep adding to it, effectively appending more sectors, and it would point to a different TOC for the updated or newer files. Close the session and the disc was done. I never tried the open sessions.

As for CD-RWs... I never used those, so I'm not sure. It seemed like a gamble to me.

I suppose the next step up would be SD cards. They are standard; you can get microSD cards plus an SD adapter (the last ten 32-128 GB ones I got were micro and included the adapter); a card reader is $12-$30, and you just put one in or pull it out... If it's supported by the motherboard and can be used without drivers, it would be a disk-like replacement to some degree.
Okay, I would say this topic has been successfully derailed; if a mod or blue wants to lock it, go right ahead.

I was just looking for some advice, and I just get people bickering with each other over pointless things.
wolfsite: Okay I would say this topic has been successfully de-railed, if a mod or blue wants to lock it go right ahead.

Was just looking for some advice and I just get people bickering between each other over pointless things.
Oh, I did not realize we had to be laser-focused on just your needs and can't talk about anything that you might consider pointless. I guess we can't talk amongst ourselves, and it has to be 100% about what you care about, or the thread should be shut down after you got what you wanted.
Post edited March 28, 2024 by EverNightX
Edited:

Just please lock the thread, mod or blue. The thread was clearly derailed.
Post edited March 28, 2024 by wolfsite