
Large number of files in root of drive

mzgavc Member Posts: 75
We have a network drive that has over 60,000 files at the root. It's tied to a database that saves directly to this directory...

Lately, people have been getting permission errors when trying to save a file to the folder, even though permissions are set correctly. It also takes a long time to open the folder (7-8 seconds). After a few seconds, they can save the file if they try again.

My question is this...

Am I correct in assuming that while the folder is loading/indexing, it's set to a read-only state, and that this is the reason they can't save... or has anyone else come across this problem and had it be something else?
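
A minimal diagnostic sketch in Python (the UNC path below is a placeholder, not the actual share): it times a full directory listing against a single write, to check whether slow enumeration and the failed saves actually coincide.

```python
# Hypothetical path; run from a client that can reach the share.
import os
import time

SHARE = r"\\server\data"  # placeholder for the real share path

t0 = time.perf_counter()
entries = os.listdir(SHARE)  # full enumeration, roughly what Explorer does on open
print(f"{len(entries)} entries listed in {time.perf_counter() - t0:.1f}s")

test_file = os.path.join(SHARE, "write_test.tmp")
t0 = time.perf_counter()
with open(test_file, "w") as f:  # a plain write does not require enumerating the folder
    f.write("test")
print(f"write completed in {time.perf_counter() - t0:.3f}s")
os.remove(test_file)
```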

Comments

  • Danman32 Member Posts: 1,243
    I believe the root directory has a limitation on how many entries it can have. At least this is true of FAT file systems; I'd have to check on NTFS.

    Very few files should actually be kept in the root; putting them there is a bad move from a security, organizational, and technical standpoint.
    Try reorganizing the files into subfolders.
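
    A minimal sketch in Python of the kind of reorganization suggested here (the share path and the two-character bucketing scheme are assumptions for illustration, not anything described in this thread):

    ```python
    # Illustrative sketch only: move files out of the root into subfolders
    # keyed by the first two characters of the file name, so no single
    # directory grows unbounded. Path and bucketing scheme are hypothetical.
    import os
    import shutil

    ROOT = r"\\server\data"  # hypothetical share root

    for name in os.listdir(ROOT):  # snapshot the current listing first
        src = os.path.join(ROOT, name)
        if not os.path.isfile(src):
            continue  # leave existing subfolders alone
        bucket = name[:2].lower() or "_"
        dest_dir = os.path.join(ROOT, bucket)
        os.makedirs(dest_dir, exist_ok=True)
        shutil.move(src, os.path.join(dest_dir, name))
    ```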
  • TheShadow Member Posts: 1,057
    It is 112 for FAT12, 512 for FAT16, and theoretically 65,534 for FAT32, but I don't think you get to use all of those entries, so it may be slightly lower. I don't believe NTFS has a limit, or if it does, it is a very high number. Could be wrong on the NTFS, however.
    Who knows what evil lurks in the heart of technology?... The Shadow DO
  • JDMurray Admin Posts: 13,031
    I believe that NTFS4 and 5 both have a limit of 4 billion (4,294,967,296) total folders and files per folder, including the root folder of any volume.
  • TheShadow Member Posts: 1,057
    Thanks for the assist, JD. I guess we can call 4 billion a large enough number. So, other than it being bad form to use the root for general storage, it should not be a problem unless he is using FAT32 along with long file names.
    Who knows what evil lurks in the heart of technology?... The Shadow DO
  • blargoe Member Posts: 4,174
    Whoever coded that application to save files at the root of the network drive needs to be taken to the gallows and horsewhipped. Personally, I try not to put ANY files in the root, though it's not always possible.
    IT guy since 12/00

    Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
    Working on: RHCE/Ansible
    Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands...
  • JDMurray Admin Posts: 13,031
    It may be a configuration issue rather than a coding issue. In that case, the admin that configured that application to save files at the root of the network drive needs to be taken to the gallows and horsewhipped.
  • rcoop Member Posts: 183
    Root of a "network" drive? Is this a network share (SMB/CIFS) on another computer? If so, I don't think it shares the same "root" limitations imposed by a non-network file system such as FAT/FAT32/NTFS.

    I recently experienced something similar, where a program was trying to write files to a directory that had over 100,000 files in it. I'm not sure at what point it started detecting write errors, but they didn't happen until there were a lot of files in the directory. Writing each file also took longer and longer as more files were added to the directory. When I looked at the code, there were a few file system calls (using UNC paths) against this network share, and these were causing the huge performance hits and the errors eventually being returned. I could see the same behavior even when typing the UNC path at a workstation's Run command. I reworked our application to limit its file system calls, including removing some folder-level calls that ended up touching every file (I'm assuming to detect whether each entry is a folder or a file), and was able to get certain routines down from about a minute per 50 documents to around 5 seconds per 50 documents, and the write errors stopped happening.

    Although we are passing these files (image files plus metadata files) to a backend process that we don't control, I did recommend that subfolder organization and recursive folder reading be built into the application that reads these files.

    If in fact this is a network share you are talking about, I would build some logging to determine at what number of files this starts to happen, look at whether organizing the files into subfolders is a possibility, optimize the network segment, and check whether there are any FileSystemObject calls being made when writing the files to the network drive (see the sketch after this post).

    In my testing for the above case, locally on an NTFS volume it was tough to reproduce the issue, as access times were around 7 seconds (for the 50 documents); it was only when accessing a (CIFS) drive across the network that I started to see this issue.

    Not sure if any of that helps, but wanted to share my experience, and confirm that someone has seen this issue with network drives.

    Take Care,
    RCoop
    Working on MCTS:SQL Server 2005 (70-431) & Server+
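
    A rough sketch in Python of the kind of change described in this post (the path, function names, and duplicate handling are assumptions for illustration): the point is that saving a file should not require enumerating the whole share first, so the cost per write no longer grows with the number of files already in the directory.

    ```python
    # Hypothetical names and path; not the application rcoop describes.
    import os

    SHARE = r"\\server\images"  # placeholder UNC path

    def save_slow(name: str, data: bytes) -> None:
        # Anti-pattern: a full listing (cost grows with the file count)
        # before every write, which is very expensive over SMB/CIFS.
        if name in os.listdir(SHARE):
            raise FileExistsError(name)
        with open(os.path.join(SHARE, name), "wb") as f:
            f.write(data)

    def save_fast(name: str, data: bytes) -> None:
        # One call per file: create the file exclusively and let the file
        # system report a duplicate instead of checking the listing first.
        with open(os.path.join(SHARE, name), "xb") as f:
            f.write(data)
    ```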
  • blargoe Member Posts: 4,174
    Well, that does make sense. When you open a folder, wouldn't Windows have to enumerate all of the objects to display them? It does seem that a larger number of files in a folder makes it take longer to open.
    IT guy since 12/00

    Recent: 11/2019 - RHCSA (RHEL 7); 2/2019 - Updated VCP to 6.5 (just a few days before VMware discontinued the re-cert policy...)
    Working on: RHCE/Ansible
    Future: Probably continued Red Hat Immersion, Possibly VCAP Design, or maybe a completely different path. Depends on job demands...
  • JDMurray Admin Posts: 13,031
    It depends on what is actually "opening" the folder. The operating system call that opens a folder for access does not read the contents of the folder. After it is opened, the folder can then be queried to check how many file system objects it logically contains. This gives a clue as to whether the folder holds some huge number of file system objects to deal with.

    The actual enumeration (listing) of file system objects is performed using a callback procedure and (hopefully) a background worker thread so the listing can be stopped at any time. In other words, a properly written program will not be forced to list out all 1,000,000+ objects in a folder; it can simply list the first few hundred objects and then continue listing objects as the user browses the folder.

    These are the kinds of things that software developers ponder whilst sitting upon the toilet.
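
    A small sketch of the lazy-listing idea in Python: os.scandir streams directory entries (on Windows it sits on top of the FindFirstFile/FindNextFile API), so a program can show the first page of a huge folder without reading all of it. The share path is a placeholder.

    ```python
    # Placeholder path; lists only the first 200 entries of the folder.
    import itertools
    import os

    SHARE = r"\\server\data"  # hypothetical share path

    with os.scandir(SHARE) as it:
        for entry in itertools.islice(it, 200):  # remaining entries are never fetched
            print(entry.name)
    ```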