If you've ever needed to react to file system changes in Java — a config file updating, an upload folder receiving a new file, a hot-reload mechanism — you've probably reached for Java's WatchService. It's clean. It's built-in. And it hides a subtle concurrency trap that will burn you in production if you're not paying attention.
In this post I'll walk through building a directory watcher using the Observer pattern, and then show exactly where race conditions creep in and how to shut them down with a ReentrantLock.
The Observer Pattern in 10 Seconds
Observer is a behavioural pattern where one object (the subject) maintains a list of dependents (observers) and notifies them automatically when its state changes.
In our case:
- Subject — the directory watcher, watching for file events
- Observers — any number of handlers that react when a file changes
```java
import java.nio.file.Path;

public interface FileChangeObserver {
    void onFileChanged(Path filePath);
}
```

```java
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class DirectoryWatcher {

    private final List<FileChangeObserver> observers = new ArrayList<>();

    public void addObserver(FileChangeObserver observer) {
        observers.add(observer);
    }

    private void notifyObservers(Path path) {
        for (FileChangeObserver observer : observers) {
            observer.onFileChanged(path);
        }
    }
}
```
Clean and extensible — add as many handlers as you need without touching the watcher itself.
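To see the wiring in action, here's a minimal, self-contained sketch. It redeclares the two types above and relaxes `notifyObservers` to package visibility so the demo can fire a notification directly; since `FileChangeObserver` has a single abstract method, lambdas work as observers:

```java
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

interface FileChangeObserver {
    void onFileChanged(Path filePath);
}

class DirectoryWatcher {
    private final List<FileChangeObserver> observers = new ArrayList<>();

    public void addObserver(FileChangeObserver observer) {
        observers.add(observer);
    }

    // package-visible here (unlike the article's private version)
    // so the demo can trigger a notification directly
    void notifyObservers(Path path) {
        for (FileChangeObserver observer : observers) {
            observer.onFileChanged(path);
        }
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        DirectoryWatcher watcher = new DirectoryWatcher();
        // each observer is an independent handler; the watcher knows nothing about them
        watcher.addObserver(path -> System.out.println("Reloading config: " + path));
        watcher.addObserver(path -> System.out.println("Invalidating cache for: " + path));
        watcher.notifyObservers(Path.of("app.yaml")); // both handlers fire
    }
}
```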
Setting Up WatchService
Java NIO gives us WatchService — a low-level file system event API.
```java
// inside DirectoryWatcher — needs imports from java.io and java.nio.file
public void watch(Path directory) throws IOException, InterruptedException {
    WatchService watchService = FileSystems.getDefault().newWatchService();
    directory.register(watchService,
            StandardWatchEventKinds.ENTRY_CREATE,
            StandardWatchEventKinds.ENTRY_MODIFY,
            StandardWatchEventKinds.ENTRY_DELETE);

    while (true) {
        WatchKey key = watchService.take(); // blocks until an event arrives
        for (WatchEvent<?> event : key.pollEvents()) {
            if (event.kind() == StandardWatchEventKinds.OVERFLOW) {
                continue; // events were lost; context is not a Path here
            }
            Path changed = directory.resolve((Path) event.context());
            notifyObservers(changed);
        }
        // reset() returns false when the key is no longer valid,
        // e.g. the watched directory was deleted — stop the loop then
        if (!key.reset()) {
            break;
        }
    }
}
```
This works perfectly — until you run it in a multi-threaded environment.
Where the Race Condition Hides
Say you spin up a thread pool to process file change events faster:
```java
ExecutorService executor = Executors.newFixedThreadPool(4);

// inside the watch loop:
for (WatchEvent<?> event : key.pollEvents()) {
    Path changed = directory.resolve((Path) event.context());
    executor.submit(() -> notifyObservers(changed));
}
```
Now four threads can be notifying observers simultaneously. If two events arrive for the same file at nearly the same time — say a file is written and then immediately modified — two threads can call onFileChanged on the same path concurrently.
Depending on what your observer does (write to a database, process the file, update a cache), you now have a race condition. Two threads reading and transforming the same file simultaneously. Silent data corruption. The worst kind of bug.
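To make the hazard concrete, here's a deterministic sketch of the classic check-then-act race. The deduplicating observer and the latch are illustrative additions; the latch simply forces the unlucky interleaving to happen on every run instead of once in a thousand:

```java
import java.nio.file.Path;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    // even a synchronized set doesn't help: the *check-then-act* is what races
    static final Set<Path> seen = Collections.synchronizedSet(new HashSet<>());
    static final AtomicInteger timesProcessed = new AtomicInteger();
    // forces both threads past the contains() check before either add()
    static final CountDownLatch bothChecked = new CountDownLatch(2);

    static void onFileChanged(Path path) throws InterruptedException {
        if (!seen.contains(path)) {           // check: "not processed yet"
            bothChecked.countDown();
            bothChecked.await();              // both threads are now past the check
            timesProcessed.incrementAndGet(); // "process" the file
            seen.add(path);                   // act: record it (too late)
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Path p = Path.of("upload.csv");
        Runnable task = () -> {
            try { onFileChanged(p); } catch (InterruptedException ignored) { }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        // the file was "processed" twice despite the deduplication attempt
        System.out.println("processed " + timesProcessed.get() + " times"); // prints 2
    }
}
```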
Fixing It With ReentrantLock
A ReentrantLock lets only one thread process a given file path at a time while other threads wait their turn.
```java
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class DirectoryWatcher {

    // note: if observers can still be registered after watching starts,
    // consider CopyOnWriteArrayList so iteration is safe during addObserver
    private final List<FileChangeObserver> observers = new ArrayList<>();
    private final Map<Path, ReentrantLock> fileLocks = new ConcurrentHashMap<>();

    private ReentrantLock getLockForPath(Path path) {
        // computeIfAbsent is atomic: exactly one lock per path
        return fileLocks.computeIfAbsent(path, p -> new ReentrantLock());
    }

    private void notifyObservers(Path path) {
        ReentrantLock lock = getLockForPath(path);
        lock.lock();
        try {
            for (FileChangeObserver observer : observers) {
                observer.onFileChanged(path);
            }
        } finally {
            lock.unlock(); // always release in finally — never skip this
        }
    }
}
```
Key points:
- `ConcurrentHashMap` gives you one lock per file path — threads processing different files don't block each other; only threads processing the same file do
- `computeIfAbsent` is atomic — no two threads will create two locks for the same path
- The `finally` block guarantees the lock is released even if an observer throws an exception
Why Not Just Use synchronized?
You could reach for a synchronized block instead, but ReentrantLock gives you more control: you can call tryLock() to skip processing when the lock is already held (useful if you'd rather drop duplicate events than queue them), and it's more explicit about what you're protecting. Synchronizing on the Path object itself is also fragile: two equal Path instances are not necessarily the same object, so they would not share a monitor.
```java
// Skip instead of queue — useful for high-frequency file events
ReentrantLock lock = getLockForPath(path);
if (lock.tryLock()) {
    try {
        // ReentrantLock is reentrant: the lock() call inside
        // notifyObservers on this same thread acquires immediately
        notifyObservers(path);
    } finally {
        lock.unlock();
    }
} else {
    System.out.println("Skipping duplicate event for: " + path);
}
```
The Full Picture
```
File system event
        ↓
   WatchService
        ↓
Thread pool (4 threads)
        ↓
notifyObservers(path)
        ↓
ReentrantLock (per path)
        ↓
Observers notified safely
```
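Putting it all together, here's a runnable sketch of the pool-plus-per-path-lock combination. The `active`/`maxActive` counters are instrumentation added purely for this demo; with the lock in place, the number of handlers inside the critical section for a single path never exceeds one:

```java
import java.nio.file.Path;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class PerPathLockDemo {
    static final Map<Path, ReentrantLock> fileLocks = new ConcurrentHashMap<>();
    // instrumentation: handlers currently in the critical section,
    // and the highest value ever observed
    static final AtomicInteger active = new AtomicInteger();
    static final AtomicInteger maxActive = new AtomicInteger();

    static void handle(Path path) {
        ReentrantLock lock = fileLocks.computeIfAbsent(path, p -> new ReentrantLock());
        lock.lock();
        try {
            int now = active.incrementAndGet();
            maxActive.accumulateAndGet(now, Math::max);
            Thread.sleep(5); // simulate observer work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            active.decrementAndGet();
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Path samePath = Path.of("report.csv");
        for (int i = 0; i < 20; i++) {
            pool.submit(() -> handle(samePath)); // 20 events for the same file
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // serialised: never more than one handler for this path at a time
        System.out.println("max concurrent handlers for one path: " + maxActive.get()); // prints 1
    }
}
```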
The Observer pattern keeps your handlers decoupled and easy to extend. The per-path locking ensures concurrent events on the same file are serialised without bottlenecking events on different files.
This pattern came up directly in production work — building file ingestion pipelines where multiple events on the same file within milliseconds of each other would otherwise cause partial reads and corrupt downstream processing.
I write about backend Java engineering, Spring Boot, and systems design. Follow for more.