Hello
I am using pLoader 2.0.5b and Piwigo 2.0.6.
I set the chunk file size in pLoader to 3000. pLoader then uploads pieces of 2.9 KB to the buffer folder on the server.
I was able to upload jpg files of about 5 MB.
With my 12 MB files I had no chance; I always get a timeout error.
What causes the timeout: the number of files in the buffer or the size of the chunk pieces?
After the timeout error the chunk files stay in the buffer folder. I had to delete several thousand chunk files after testing.
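For reference, the chunk count grows quickly at small chunk sizes: a 12 MB file at 3000 bytes per chunk already produces over 4000 chunk files, which matches the thousands left in the buffer folder. A minimal Python sketch (the function name is mine, not pLoader's):

```python
def split_into_chunks(data: bytes, chunk_size: int) -> list:
    """Split a byte string into fixed-size chunks; the last one may be shorter."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# Stand-in for a 12 MB JPEG; a real upload would read the file from disk.
photo = bytes(12 * 1024 * 1024)
chunks = split_into_chunks(photo, 3000)
print(len(chunks))  # 4195 chunk files end up in the buffer folder
```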
Before the official answer from the pLoader developers: I think the timeout is on the server side, during the merge of the chunks (or a similar task). Servers are generally configured with a 30-second execution time limit.
You found the limit with a 12MB file. Even if the pLoader team found a clever way to bypass it now, maybe in three years you will want to transfer a 60MB file, and it will fail again for similar reasons.
Is there any good reason to push such big files to a server?
Who can download that? (e.g. on an iPhone 3G connection?)
(Intranet: yes, but the internal network won't like that either.)
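The server-side merge mentioned above amounts to concatenating the uploaded chunk files in order. Piwigo's actual implementation is PHP; this is only a rough Python sketch with made-up file naming, which also removes each chunk after merging so they don't pile up in the buffer folder:

```python
import glob
import hashlib
import os

def merge_chunks(buffer_dir: str, prefix: str, dest_path: str) -> str:
    """Concatenate chunk files named <prefix>.0, <prefix>.1, ... in numeric
    order into dest_path; return the md5 of the merged result."""
    chunk_paths = sorted(glob.glob(os.path.join(buffer_dir, prefix + ".*")),
                         key=lambda p: int(p.rsplit(".", 1)[1]))
    md5 = hashlib.md5()
    with open(dest_path, "wb") as out:
        for path in chunk_paths:
            with open(path, "rb") as chunk:
                data = chunk.read()
            md5.update(data)
            out.write(data)
            os.remove(path)  # clean up so chunks don't stay in the buffer folder
    return md5.hexdigest()
```

The md5 of the merged file lets the server verify the result against the checksum the client computed before splitting.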
Thanks for your quick answer.
I know the files are very big, but I have some clients who would like a tool where they can easily access and download high-res files, so that they can print a picture at A3 size at a resolution of 300 dpi.
That is the reason why I try to upload such big files... (it's 5600x3700 px, which is what comes out of a Canon EOS 5D Mark II).
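As a quick sanity check on those numbers (taking 5600x3700 px at face value): at 300 dpi the image prints at roughly 474 x 313 mm, which does cover an A3 sheet (420 x 297 mm):

```python
# Print size in mm of a 5600x3700 px image at 300 dpi.
px = (5600, 3700)
dpi = 300
print_mm = tuple(p / dpi * 25.4 for p in px)  # pixels -> inches -> mm
a3_mm = (420, 297)  # A3 in landscape orientation

print(print_mm)  # ~ (474.1, 313.3) mm
print(all(side > limit for side, limit in zip(print_mm, a3_mm)))  # True
```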
Sorry for missing this topic, danibo. I think VDigital is right that the timeout occurs during the chunk merge (whether it's a timeout or an out-of-memory error, I need to investigate).
I would like to perform some tests on your 12MB file. Can you upload it somewhere so that I can download it from a URL?
plg
I posted some files at
http://www.bilderlager.ch/clients/piwigo_upload_test/
regards
danibo
I've performed some tests with your 12MB file.
1) chunk size = 500 000 (the 12MB file is split into 25 chunks)
The "merge" task, corresponding to the last call to the web API, lasts:
* 0.15 second when the server is my computer (Core2Duo@2.4GHz, hard disk at 85MB/s)
* 0.20 second when the server is piwigo.com
2) chunk size = 3 000 (4151 chunks)
* 0.91 second on my computer
* 0.75 second on piwigo.com BUT IT FAILED, the md5sum of the merged file differs from the expected md5sum
And the memory usage on the server side is 5MB maximum (a default PHP configuration allows 16MB or 32MB).
I only ran one test for each case; times are far below the 30-second limit.
A chunk size of 3000 is really low. Why did you decrease the default value that much?
The problem with such a low chunk size is that it greatly increases the number of resulting chunks. The more chunks you have, the more likely you are to encounter a failure during network transfer. With 25 chunks the probability is low; with 4151 chunks it is much higher. It worked when the server was my own computer, certainly because the network between client and server couldn't be simpler, but it failed when I transferred to a remote server.
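The point about failure probability can be made concrete. Assuming each chunk transfer independently has some small chance of corruption (the 0.01% figure below is purely illustrative, not a measured value), the chance that all chunks arrive intact drops quickly as the chunk count grows:

```python
def success_probability(per_chunk_failure: float, num_chunks: int) -> float:
    """Probability that every chunk transfers correctly, assuming
    independent failures per chunk."""
    return (1 - per_chunk_failure) ** num_chunks

p = 0.0001  # illustrative per-chunk failure rate
print(success_probability(p, 25))    # ~0.9975 with 25 chunks (chunk size 500 000)
print(success_probability(p, 4151))  # ~0.66 with 4151 chunks (chunk size 3000)
```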
With the chunk file size set to 500 000, uploading 12MB jpg files works fine.
Thanks
Setting it to 3000 was a misinterpretation of the problem.
danibo