Wrote mini scripts (complex commands in the console; tomorrow I need to put them into a proper script. The commands are in the file /mnt/t3nfs01/data01/shome/dbrzhech/history_bash_20180620.log) to analyse batch job completeness: was a .root file copied from the worker node to the local area, does the .root file exist at my local path, etc. This is needed because of the huge number of running jobs, i.e. the number of log files to analyse. With the help of these "mini scripts" I resubmitted the jobs that ran out of time and did not finish (the all.q queue was not enough for them, since its max time is 10h; the fraction of such jobs is small: 38 out of ~1100). I chose all.q yesterday as the fastest way to obtain results, because the number of jobs running simultaneously in long.q was 5-6 times smaller than in all.q, while the number of files per job differs only by a factor of 3 (15 files/job for long.q vs 5 files/job for all.q).
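The completeness check described above could be sketched roughly like this. This is only an assumed reconstruction: the function name `find_missing_jobs`, the output-file naming scheme `output_<i>.root`, and the directory layout are all hypothetical, not taken from the actual commands in the history file.

```shell
#!/bin/bash
# Hedged sketch of the "mini scripts" idea: given a local output directory
# and the expected number of jobs, print the indices of jobs whose .root
# file is missing or empty (an empty file usually means the copy from the
# worker node failed). The naming scheme "output_<i>.root" is an assumption.
find_missing_jobs() {
    local outdir="$1" njobs="$2"
    local missing=()
    for i in $(seq 1 "$njobs"); do
        # -s: file exists AND has size > 0; zero-size files count as failed
        [ -s "$outdir/output_${i}.root" ] || missing+=("$i")
    done
    echo "${missing[*]}"
}
```

In this sketch, running `find_missing_jobs /path/to/local/area 1100` would print the indices of the jobs to resubmit (e.g. the 38 jobs that timed out in all.q).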