Greetings!

goredo (like apenwarr/redo, redo-c, baredo, redo-sh) expects each filesystem path to reference only a single "object", a single entity it is aware of. To make hardlinks work, the whole architecture would have to change, so that a single entity (under redo's control) is explicitly expected to possibly have multiple identifiers on the filesystem. I agree with Spacefrogg's answers in that thread, emphasizing that any expectation of particular behaviour from hardlinked files is simply wrong (with the currently widespread implementations). You should redesign your targets and workflow to fit redo's expectations. Maybe some proxy/intermediate targets will help; maybe just do not track all of the generated files and honestly rely on the behaviour defined by your .do-files, where successful completion of the "foo" target also implies the existence of the "foo.bar" file (although untracked).

>There are two reasons for changing the behavior:
>
>1. The optimization is not improving the entire execution by a lot. Instead
>of calling os.Rename, goredo calls os.Remove and os.Chtimes. Since both the
>temporary file and the target are in the same directory, they're also on the
>same drive, making os.Rename a cheap operation.

Actually, that optimisation *may* improve execution a lot when goredo is used on filesystems that make heavy use of the write cache (like UFS with soft updates, or ZFS). With that optimisation, the temporary file is created, then filled with the output, and then deleted: everything related to it is simply dismissed from the write cache and no real I/O is issued to the disk, except for the inode update, which is a much more lightweight operation. Without that optimisation, your disk is literally forced to create a new copy of the file and remove the old one, which is a considerable amount of actually issued I/O. You may notice that the "if hsh == hshPrev" check is done before fsync() is called, so that optimisation works even with REDO_NO_SYNC=0.
I made that optimisation exactly because of a high I/O rate with no file contents really changing.

>An alternative suggestion is to change the function `isModified`. Currently
>it only checks the ctime/mtime based on the value REDO_INODE_TRUST, but will
>not check the file contents in case the time differs (this is the behavior
>of `Inode.Equals`). Adding a hash check before returning the computed
>modified value would decrease the number of false positives here, because
>this is the reason why the warning about an externally modified file is
>emitted in the first place. I'm not sure whether the hash check should be
>done in Inode.Equals or outside of that function.

The modification check is only intended to warn the user about unexpected events happening to a target under redo's observation and control. A target must be produced only by redo itself, under its tight control. If someone "external" modifies it, then in general that can be treated as undefined behaviour and incorrect use of the redo ecosystem itself. So even if the file's content stays the same, but its inode is touched "outside" redo, then something wrong is already occurring.

-- 
Sergey Matveev (http://www.stargrave.org/)
OpenPGP: 12AD 3268 9C66 0D42 6967 FD75 CB82 0563 2107 AD8A