How to delete this indelible directory?

One way to delete files/directories like this is by their inode number.

To find the inode numbers of the entries in the current directory:

ls -i
14813568 mikeaâcnt

To delete the entry by its inode number:

find . -inum 14813568 -delete
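The two steps can be tied together in a self-contained sketch (the file and directory names here are throwaway examples, not from the question; adding -xdev is my suggestion, since inode numbers are only unique within one filesystem):

```shell
# Create a scratch directory and a file to remove by inode alone.
dir=$(mktemp -d)
touch "$dir/victim"

# "ls -i" prints "inode filename" pairs; capture the inode number.
inum=$(ls -i "$dir/victim" | awk '{print $1}')

# Delete by inode, never typing the (possibly garbled) name.
# -xdev keeps find on this one filesystem.
find "$dir" -xdev -inum "$inum" -delete

[ ! -e "$dir/victim" ] && echo "removed by inode"
rmdir "$dir"
```

For a directory rather than a file, -delete only works once it is empty; otherwise replace it with -exec rm -ri {} + so you are prompted before anything is removed.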

The following excerpt from an essay on filename encoding potentially explains why that directory refuses to be deleted:

NFSv4 requires that all filenames be exchanged using UTF-8 over the wire. The NFSv4 specification, RFC 3530, says that filenames should be UTF-8 encoded in section 1.4.3: “In a slight departure, file and directory names are encoded with UTF-8 to deal with the basics of internationalization.” The same text is also found in the newer NFS 4.1 RFC (RFC 5661) section 1.7.3. The current Linux NFS client simply passes filenames straight through, without any conversion from the current locale to and from UTF-8. Using non-UTF-8 filenames could be a real problem on a system using a remote NFSv4 system; any NFS server that follows the NFS specification is supposed to reject non-UTF-8 filenames. So if you want to ensure that your files can actually be stored from a Linux client to an NFS server, you must currently use UTF-8 filenames. In other words, although some people think that Linux doesn’t force a particular character encoding on filenames, in practice it already requires UTF-8 encoding for filenames in certain cases.

UTF-8 is a longer-term approach. Systems have to support UTF-8 as well as the many older encodings, giving people time to switch to UTF-8. To use “UTF-8 everywhere”, all tools need to be updated to support UTF-8. Years ago, this was a big problem, but as of 2011 this is essentially a solved problem, and I think the trajectory is very clear for those few trailing systems.

Not all byte sequences are legal UTF-8, and you don’t want to have to figure out how to display them. If the kernel enforces these restrictions, ensuring that only UTF-8 filenames are allowed, then there’s no problem... all the filenames will be legal UTF-8. Markus Kuhn’s utf8_check C function can quickly determine if a sequence is valid UTF-8.
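The validity check mentioned above can be approximated from the shell with iconv (a sketch, not utf8_check itself, which is a C function; asking iconv to reconvert UTF-8 to UTF-8 fails on invalid byte sequences):

```shell
# Succeed iff stdin is valid UTF-8.
is_utf8() { iconv -f UTF-8 -t UTF-8 >/dev/null 2>&1; }

# \xc3\xa2 is the valid UTF-8 encoding of "â"...
printf 'mikea\xc3\xa2cnt' | is_utf8 && echo "valid UTF-8"

# ...but a lone \xe2 byte starts a 3-byte sequence that "c" cannot continue.
printf 'mikea\xe2cnt' | is_utf8 || echo "not valid UTF-8"
```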

The filesystem should be requiring that filenames meet some standard, not because of some evil need to control people, but simply so that the names can always be displayed correctly at a later time. The lack of standards makes things harder for users, not easier. Yet the filesystem doesn’t force filenames to be UTF-8, so it can easily have garbage.


You should not type the non-ASCII characters on the command line since, as you could see, the bytes you type won't necessarily match the bytes stored in the filename (Unicode has various ways of expressing accented letters). Something like:

rm -rf mike*

should work, since the filename is then generated directly by the shell. But make sure there's only one match (run echo mike* first to confirm).
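A cautious version of that sequence, reproduced in a scratch directory (bash syntax; the mangled directory name here is fabricated to mimic the situation, using a lone \xe2 byte that is not valid UTF-8):

```shell
# Set up a directory whose name contains a raw, non-UTF-8 byte.
dir=$(mktemp -d)
mkdir "$dir/mikea"$'\xe2'"cnt"
cd "$dir"

# First confirm exactly what the glob matches...
echo mike*

# ...then let the shell, not the keyboard, supply the name's bytes.
rm -rf mike*

cd - >/dev/null
```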

Well, if cd works, then there's no reason why rm or ls should say "No such file or directory", so the problem may be at the filesystem level.

Note: do not use plain ls to check whether a directory is empty, but ls -a, which also shows dotfiles.
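To illustrate why (hypothetical names; a dotfile makes a directory look empty to plain ls):

```shell
d=$(mktemp -d)
touch "$d/.hidden"

ls "$d"       # prints nothing: the directory looks empty
ls -a "$d"    # prints ".", "..", and ".hidden"
```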

The directory may still be in use by another process (including as the cwd of some process). IMHO, that's why it still "exists" yet yields errors, e.g. with ls. lsof may give you some information, but with NFS you need to find out which machine is using it. NFS in particular can produce strange errors in this situation; ls -a in the parent directory could show .nfs* files/directories in some cases.
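The mechanism behind those .nfs* entries can be demonstrated locally (a sketch with throwaway names; on a local filesystem a deleted-but-open file simply loses its name, while an NFS client emulates this by renaming the file to .nfs* until the last process closes it):

```shell
f=$(mktemp)
echo "still here" > "$f"

exec 3<"$f"          # keep the file open on fd 3
rm "$f"              # unlink the name

test ! -e "$f" && echo "name is gone"
cat <&3              # prints "still here": the data outlives the name
exec 3<&-
```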

When you get:

$ ls
ls: cannot access mikeaâcnt: No such file or directory
mikeaâ??cnt

I suspect that the file still exists in the directory table, due to NFS caching and/or because it is used by another process, but without associated information. When ls tries to get information on the file itself, it gets an error because the file no longer exists (only its directory-table entry remains), hence the displayed error; ls then still outputs the filename because it is in the directory table. The fact that you get question marks in one case but not in the other is IMHO a display bug in ls (unrelated to your problem).