Okay, sorry about all this craziness! This is kind of “haydxn’s madness blog”…
It turns out I’ve only just come across this weird behaviour since trying the XmlDocument::getDocumentElement(true) method to test the outer element first. Looking through the source, I can see where this 8192 ‘magic number’ comes from :hihi: … it seems the document ‘remembers’ that setting and only ever reads up to 8192 chars of the file afterwards.
This works fine…
XmlElement* configData = configDoc.getDocumentElement();

if (configData)
{
    return configData;
}
THIS, on the other hand…
XmlElement* configData = configDoc.getDocumentElement(true);

if (configData)
{
    if (configData->hasTagName(configRootTagName))
    {
        // root tag is correct, so get whole tag...
        delete configData; // get rid of outer element...
        configData = configDoc.getDocumentElement();
        DBG(configDoc.getLastParseError());
        return configData;
    }
}
… always manages to run out of data after 8192 bytes of processing in the XmlDocument. Am I doing something very wrong here? Am I supposed to create a new XmlDocument object to do the full scan?
For example…
XmlElement* configData = configDoc.getDocumentElement(true);

if (configData)
{
    if (configData->hasTagName(configRootTagName))
    {
        // root tag is correct, so get whole tag...
        delete configData; // get rid of outer element...
        XmlDocument auxDoc(sameFileAsBefore);
        configData = auxDoc.getDocumentElement();
        DBG(auxDoc.getLastParseError());
        return configData;
    }
}
… this works!
I’d have thought it would be okay to just do the scan again with the default ‘false’ setting, yet somehow it still stops at a maximum of 8192 characters (which I see is the maximum number of characters read in ‘true’ mode)…
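For what it’s worth, here’s roughly how I’ve wrapped the workaround up for now. This is just a sketch using the same raw-pointer API as the snippets above; the function name loadConfigData and the configFile parameter are placeholders of my own, not anything from the library:

// Sketch of the workaround: peek at the outer element first, then
// re-parse the whole file with a *fresh* XmlDocument, since re-using
// the first document seems to stop after 8192 characters.
// (Assumes the juce headers are already included.)
XmlElement* loadConfigData (const File& configFile,
                            const String& configRootTagName)
{
    XmlDocument outerDoc (configFile);

    // only read the outer document element, to check the root tag...
    XmlElement* outerElement = outerDoc.getDocumentElement (true);

    if (outerElement != 0)
    {
        const bool rootTagIsCorrect = outerElement->hasTagName (configRootTagName);
        delete outerElement; // get rid of the outer element...

        if (rootTagIsCorrect)
        {
            // root tag is correct, so do the full parse with a new document...
            XmlDocument fullDoc (configFile);
            XmlElement* configData = fullDoc.getDocumentElement();
            DBG (fullDoc.getLastParseError());
            return configData;
        }
    }

    return 0;
}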
Shoot me down for not spotting this sooner…
But is this indeed a bug, or does my code show that I’m even more clueless than I’ve just realised?