That worked for me, but now the files can’t be opened after the save. I also checked with the Linux zcat command and it says the data is not in gzip format. Is there something wrong with what I’m doing?
No, I don’t think I’ve changed anything, and I’ve been using the class recently for creating zip files, so I’m pretty sure it works… Have you tried reading the result back with a GZIPDecompressorInputStream?
void testGzip()
{
    File compressedFile ("c:\\File.gz");
    ValueTree test ("myTree");
    Array<File> tempFiles;
    MemoryOutputStream testBinData;
    GZIPCompressorOutputStream gzipOutputStream (&testBinData);

    File tempDir = File::getSpecialLocation (File::tempDirectory);
    tempDir.findChildFiles (tempFiles, File::findFilesAndDirectories, true, "*.*");

    for (int i = 0; i < tempFiles.size(); i++)
    {
        ValueTree child ("file");
        child.setProperty ("path", tempFiles[i].getFullPathName(), 0);
        child.setProperty ("hash", tempFiles[i].hashCode64(), 0);
        test.addChild (child, -1, 0);
    }

    ScopedPointer<XmlElement> testXml (test.createXml());

    if (testXml)
    {
        Logger::writeToLog ("testXml is valid, tree size: " + String (testXml->createDocument ("").length()));
        testXml->writeToStream (gzipOutputStream, "");
    }

    gzipOutputStream.flush();
    Logger::writeToLog ("data in memory output stream=" + String (testBinData.getDataSize()));
    compressedFile.replaceWithData (testBinData.getData(), testBinData.getDataSize());
    Logger::writeToLog ("file size after write=" + String (compressedFile.getSize()));

    /* read the data back */
    ScopedPointer<FileInputStream> fz (compressedFile.createInputStream());

    if (fz)
    {
        GZIPDecompressorInputStream gzInput (fz, false);
        MemoryBlock decompressedData (8192, true);
        MemoryBlock buf (8192);

        while (! gzInput.isExhausted())
        {
            const int data = gzInput.read (buf.getData(), 8192);
            Logger::writeToLog ("read " + String (data) + " bytes from compressed stream");
            decompressedData.ensureSize (decompressedData.getSize() + data, false);
            decompressedData.append (buf.getData(), data);
        }

        String xmlData = decompressedData.toString();
        Logger::writeToLog ("decompressed data size=" + String (decompressedData.getSize()));
        Logger::writeToLog ("decompressed document length=" + String (xmlData.length()));
    }
}
The output is:
testXml is valid, tree size: 51154
data in memory output stream=7776
file size after write=7776
read 8192 bytes from compressed stream
read 8192 bytes from compressed stream
read 8192 bytes from compressed stream
read 8192 bytes from compressed stream
read 8192 bytes from compressed stream
read 6819 bytes from compressed stream
decompressed data size=103750
decompressed document length=0
It’s 655 before compression and 619 after, and even looking at the debug output it’s different (I think tabs are removed), but remember that no XML parsing is done after decompression, so the strings should be identical (I think).
Also, I noticed why my actual code is not working: after decompression my XML documents look OK, but they are missing the “<” in the “<?xml” (the first character is gone after decompression), and the XML parser fails.
I’m not sure what you’re doing wrong; your code’s a bit spaghetti-ish and I can’t really see what’s going on, but there must be something silly happening in there. Try cleaning it up and restructuring it, and my guess is that you’ll find the mistake.
Just to explain myself (every time I hit a problem like this I feel like a complete asshole for crying about it, sorry for that):
Anyway, I used the MemoryBlock stuff to display some progress for large files.
I fixed my problem: it was due to scope issues. I never thought about putting all those streams in nested scopes (“{}”), so I was getting incomplete results.
Thank you for showing that to me.
Actually, while doing some other work today I spotted a bug in the gzip class that might be the real cause of what you’re seeing there.
And it’s a really nasty one to fix, because there’s no perfect solution… Basically, in the current code, when you call flush() on a gzip stream, it finishes the zip operation, which performs some special magic and closes the stream - that means that any subsequent attempts to write to the stream will fail. The bug was that if this happened, the code wasn’t asserting, so it would go unnoticed.
My first reaction was that I should just make flush() do nothing, and close the stream in the destructor instead. But then, the finished result is invalid until the stream object is actually deleted - and I bet that a lot of people will have code that calls flush() and then tries to use the result before they’ve bothered to delete the stream object.
So, I guess the best solution here is to leave flush() as it currently works, but make sure there’s an assertion if you try to write to it afterwards (despite this being a reasonable thing to want to do). Annoying that there isn’t a fix for this that works in all cases!