New ZFS features landing in FreeBSD, MAP_STACK for OpenBSD, how to write safer C code with Clang's address sanitizer, Michael W. Lucas on sponsor gifts, TCP blackbox recorder, and Dell disk system hacking.

Headlines

9188 increase size of dbuf cache to reduce indirect block decompression

With compressed ARC (6950) we use up to 25% of our CPU to decompress indirect blocks, under a workload of random cached reads. To reduce this decompression cost, we would like to increase the size of the dbuf cache so that more indirect blocks can be stored uncompressed. If we are caching entire large files with recordsize=8K, the indirect blocks use 1/64th as much memory as the data blocks (assuming they have the same compression ratio). We suggest making the dbuf cache 1/32nd of all memory, so that in this scenario we should be able to keep all the indirect blocks decompressed in the dbuf cache. (We want it to be more than the 1/64th that the indirect blocks would use because we need to cache other things in the dbuf cache as well.) In real-world workloads this won't help as dramatically as the example above, but we think it's still worth it because the risk of decreasing performance is low. The potential negative performance impact is that we will be slightly reducing the size of the ARC (by ~3%).

The idea of Storage Pool Checkpoint (aka zpool checkpoint) deals with exactly that. It can be thought of as a "pool-wide snapshot" (or a variation of extreme rewind that doesn't corrupt your data). It remembers the entire state of the pool at the point that it was taken, and the user can revert back to it later or discard it. Its generic use case is an administrator who is about to perform a set of destructive actions on ZFS as part of a critical procedure. She takes a checkpoint of the pool before performing the actions, then rewinds back to it if one of them fails or puts the pool into an unexpected state. With the assumption that no one else is making modifications to ZFS, she basically wraps all these actions into a "high-level transaction".

8484 Implement aggregate sum and use for arc counters

In pursuit of improving performance on multi-core systems, we should implement fanned-out counters and use them to improve the performance of some of the ARC statistics. These stats are updated extremely frequently, and can consume a significant amount of CPU time.

9321 arc_loan_compressed_buf() can increment arc_loaned_bytes by the wrong value

arc_loan_compressed_buf() increments arc_loaned_bytes by psize unconditionally. In the case of zfs_compressed_arc_enabled=0, when the buf is returned via arc_return_buf(), if ARC_BUF_COMPRESSED(buf) is false, then arc_loaned_bytes is decremented by lsize, not psize. Switch to using arc_buf_size(buf) instead of psize, which will return psize or lsize depending on the result of ARC_BUF_COMPRESSED(buf).

Almost 2 decades ago we started work on W^X. Pages that are writable should not be executable. We applied this concept object by object, trying to separate objects with different qualities to different pages. The first one we handled was the signal trampoline at the top of the stack. We just kept making changes in the same vein. Eventually W^X came to some of our kernel address spaces also. The fundamental concept is that an object should only have the permissions necessary, and any other operation should fault. The only permission separations we have are kernel vs userland, and then read, write, and execute. How about we add another new permission! This is not a hardware permission, but a software permission.