We received a question by e-mail about how one might read NDF files recorded with an Animal Location Tracker directly from a Python script. The only such importer we know of is part of the PyEcog2 package, which is currently available in the following GitHub repository:
This repository is maintained by Marco Leite of ION/UCL. As yet, there is no user manual or help page. The import routine in PyEcog works with recordings made by Octal Data Receivers (ODRs) and Telemetry Control Boxes (TCBs). So far as we know, it has not been configured for Animal Location Trackers (ALTs). The only difference between the messages recorded by these three receivers is the size of the payload attached to each message: the ODR has 0 bytes of payload, the TCB 2 bytes, and the ALT 16 bytes. Enhancing PyEcog for ALT data would be trivial, and we are happy to ask Marco to do so.
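To illustrate how little separates the three receiver types, here is a hedged sketch of splitting a raw message stream by receiver. The payload sizes come from the paragraph above; the four-byte core message size used here is a hypothetical placeholder for illustration only, and the authoritative layout is the telemetry message format documentation linked below.

```python
# Sketch: split a raw telemetry byte stream into messages, assuming a
# fixed-size core message followed by a receiver-specific payload.
# CORE_SIZE is a HYPOTHETICAL placeholder; consult the telemetry
# message format documentation for the real core layout.

CORE_SIZE = 4  # assumed core message size in bytes (hypothetical)
PAYLOAD_SIZE = {"ODR": 0, "TCB": 2, "ALT": 16}  # from the receiver specs above

def split_messages(raw: bytes, receiver: str):
    """Yield (core, payload) byte pairs from a raw message stream."""
    step = CORE_SIZE + PAYLOAD_SIZE[receiver]
    for i in range(0, len(raw) - step + 1, step):
        chunk = raw[i:i + step]
        yield chunk[:CORE_SIZE], chunk[CORE_SIZE:]
```

Under these assumptions, an ALT stream would use 4 + 16 = 20 bytes per message, so supporting the ALT in an existing ODR/TCB importer is a one-constant change.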
We describe the NDF format here:
We describe the telemetry message format here:
The code we use to reconstruct telemetry signals that have been affected by signal loss, interference, and message duplication is in the lwdaq_sct_receiver routine here:
The Neuroplayer.tcl, Neurorecorder.tcl, and Receiver.tcl files are all available on our GitHub repository, and in your own LWDAQ distribution.
Best Wishes, Kevan
reading NDF files in python
- Site Admin
- Posts: 72
- Joined: Fri Nov 11, 2022 1:21 pm
Re: reading NDF files in python
Further questions from our customer.
"If I understand correctly the pyecog suite may not be suitable for loading
the NDF files recorded with the AL systems so we will refrain from
attempting to use it and employ another strategy."
I am confident that Marco Leite will be happy to add support for the ALT. I am going to send him a link to this discussion.
"Which language did you use to code the exporter function of neuroplayer? Would you be keen in sharing your code that we could use as a basis?"
I use Pascal for anything that has to run fast, see link in previous post.
"Indeed the exporter function of the neuroplayer is not very practical for
our use since when exporting multiple NDF files together. We need to do
this separately for different batches of a given recording. i.e. we record
an animal for one week, we export the data, in the meanwhile we keep
recording, and one week later we need to export the second week of
recording."
If I were you, I would export as you are recording, so the export file is always ready to use.
"The exporter function creates an overlap between the resulting EDF binary
files and this overlap is not always the same. Therefore when concatenating
signals from different batches we are not able to infer how much to cut on
each side of the files (the last one of batch 1 and the first one of batch
2)."
How big is the overlap?
How long are your individual NDF files?
Do you have the Synchronize button checked in the Neuroplayer when you record?
How long is the overlap? Is it seconds, minutes, or tens of minutes?
"Maybe we are missing something that already allows us to do so? Is this
overlap necessary for the conversion process?"
There is no overlap necessary in the conversion process. But there is always the issue of clocks running at different speeds. The ODR, TCB, and ALT all use a 1 ppm temperature-compensated clock, so they will be correct to ±0.6 s per week. But your computer clock will not be as accurate. If your computer is connected to the Internet, and it is correcting its clock using Network Time, then it will drift by no more than ±5 s during the week. On average, the computer clock will be exactly correct.
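The 0.6 s per week bound follows directly from the 1 ppm specification:

```python
# Worst-case drift over one week for a 1 ppm temperature-compensated clock.
SECONDS_PER_WEEK = 7 * 24 * 3600           # 604800 s
receiver_drift = 1e-6 * SECONDS_PER_WEEK   # 1 ppm of one week
print(f"receiver drift bound: {receiver_drift:.2f} s")  # about 0.60 s
```

The Network-Time-corrected computer clock, by contrast, is only guaranteed to within a few seconds, hence the ±5 s figure above.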
How should we handle the disagreement between the computer and the telemetry receiver? There are two ways. One way is we trust the receiver clock, and use only the receiver clock. When recording one-hour NDF files, we record one hour of receiver data, start a new NDF file, and give the new file a timestamp that is 3600 s after the timestamp of the previous file. There is no overlap. There is no loss of data. After eight weeks, the recording may be wrong by 5 s. This is what happens if you un-check the Synchronize button in the Neurorecorder.
The second way is to synchronize the NDF files each time we make a new one. We reset the telemetry receiver clock so that it agrees with the computer clock, and begin a new recording at the start of a new computer-clock second. We give the file a timestamp equal to the computer time. If the computer clock runs faster than the receiver clock, the NDF file names will be separated by more than 3600 s. If the computer clock runs slower, they will be separated by less than 3600 s. The drawback of this method is that when we reset the telemetry receiver, we lose some data, perhaps half a second. This is what happens if you check the Synchronize button in the Neurorecorder.
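As a sketch of the first scheme, successive NDF file names in the unsynchronized mode are spaced by exactly the file length, regardless of computer-clock drift. The helper below is hypothetical, for illustration of the naming pattern only:

```python
# Sketch: expected NDF file names when trusting the receiver clock
# (Synchronize un-checked). Each name is "M" + Unix timestamp + ".ndf",
# and successive timestamps differ by exactly one file length.
def ndf_names(start_time: int, file_length_s: int = 3600, count: int = 3):
    return [f"M{start_time + i * file_length_s}.ndf" for i in range(count)]

print(ndf_names(1702291203))
# ['M1702291203.ndf', 'M1702294803.ndf', 'M1702298403.ndf']
```

In the synchronized mode, the spacing instead tracks the computer clock, so it wanders around 3600 s rather than equaling it exactly.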
When we have Animal Cage Cameras (ACCs) recording synchronous video, we have to check Synchronize so the telemetry receiver and the cameras can stay on the same clock as the computer. Otherwise, you can use the telemetry receiver as your reference clock.
Best Wishes, Kevan
-
- Posts: 4
- Joined: Fri Jul 12, 2024 4:00 am
Re: reading NDF files in python
Hello,
Thank you very much for your precious help.
We have carefully analyzed your suggestions and there are a few elements that we would like to clarify with you.
1) “If I were you, I would export as you are recording, so the export file is always ready to use.”
We were not aware that it was possible to export data in EDF format while recording. After checking the documentation, we found this could be very helpful for us. However, when we export data with Neuroplayer on the machine used for recording data with Neurorecorder, we often experience crashes during the export process.
The only reason we can think of that could cause these crashes is the limited computational capacity of the computer used to record the signals. Indeed CPU usage is at 97-100% when recording and exporting at the same time. Therefore, we aim to transfer the acquisition to a more powerful computer to record and export the signals simultaneously. Is there a specific internal extension card (or another specific component) we have to add to the new machine we are assembling?
2) “How big is the overlap? How long are your individual NDF files? Do you have the Synchronize button checked in Neuroplayer when you record? How long is the overlap? Is it seconds, minutes, or tens of minutes?”
Let me better explain our overlap problem with an example.
Imagine we want to export the successive NDF files “M1702291203.ndf” and “M1702294804.ndf” to EDF format and we have chosen a processing interval of 8 seconds in the Neuroplayer menu.
Each of these two files corresponds to a one-hour recording, which is well reflected by the Unix-time difference in the file names: 1702294804 - 1702291203 = 3601 seconds.
Once the conversion to EDF is complete:
- “M1702291203.ndf” has been converted to “E1702291203.edf”
- “M1702294804.ndf” has been converted to “E1702294795.edf”
Thus, the Unix-Time index difference in the EDF file names is no longer 3601 seconds but 1702294795 - 1702291203 = 3592 seconds. This results in an overlap of 3600 - 3592 = 8 seconds between the two recordings, with both EDF files having a duration of 3600 seconds.
Similarly, if we choose a processing interval of 2 seconds:
- “M1702291203.ndf” is converted to “E1702291203.edf”
- “M1702294804.ndf” is converted to “E1702294801.edf”
We therefore have an overlap of 3600 - (1702294801 - 1702291203) = 3600 - 3598 = 2 seconds. Thus, the overlap matches the duration of the processing interval.
Likewise, if we launch the successive export of “M1702291203.ndf”, “M1702294804.ndf”, and “M1702298405.ndf” with an 8-second processing interval:
- “M1702291203.ndf” is converted to “E1702291203.edf” (1)
- “M1702294804.ndf” is converted to “E1702294795.edf” (2)
- “M1702298405.ndf” is converted to “E1702298389.edf” (3)
Thus, we observe:
- A temporal shift of 8 seconds between “M1702294804.ndf” and “E1702294795.edf” (2)
- A temporal shift of 16 seconds between “M1702298405.ndf” and “E1702298389.edf” (3)
We can then hypothesize that: temporal shift = (number of preceding files) × processing-interval duration.
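The shift described above can be measured mechanically from the file names alone. This hypothetical helper (not part of the Neuroplayer) subtracts the Unix timestamps embedded in each NDF/EDF pair; note that it reports 9 s rather than 8 s for the second pair, because the NDF names themselves are 3601 s apart:

```python
# Hypothetical helper: measure EDF-vs-NDF timestamp shifts from file names.
import re

def timestamp(name: str) -> int:
    """Extract the Unix timestamp from names like M1702291203.ndf."""
    return int(re.search(r"\d+", name).group())

def shifts(pairs):
    """For each (ndf, edf) pair, return edf_ts - ndf_ts (0 means no shift)."""
    return [timestamp(edf) - timestamp(ndf) for ndf, edf in pairs]

print(shifts([("M1702291203.ndf", "E1702291203.edf"),
              ("M1702294804.ndf", "E1702294795.edf")]))  # [0, -9]
```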
However, if we launch the export (still with an 8-second processing interval) of two batches of files separately, for instance:
- First of “M1702291203.ndf” and “M1702294804.ndf”
- And then of “M1702298405.ndf” and “M1702302006.ndf”
Then,
- “M1702291203.ndf” is converted to “E1702291203.edf” (1)
- “M1702294804.ndf” is converted to “E1702294795.edf” (2)
- “M1702298405.ndf” is converted to “E1702298405.edf” (3)
- “M1702302006.ndf” is converted to “E1702301998.edf” (4)
Thus, we observe:
- No temporal shift for the first exported files (1) and (3)
- An 8-second shift for the second exported files (2) and (4)
This is problematic for us because, if a crash occurs during an export, we have to reindex all the remaining files to be exported before starting their export.
Therefore, could you please let us know if this is normal and, if so, if it is possible to avoid this overlap?
Regarding the Synchronize button, we do check it when we record our data with Neurorecorder.
Best wishes,
Raphaël
Re: reading NDF files in python
Dear Raphael,
> We were not aware that it was possible to export data in EDF format while recording. After checking the documentation,
> we found this could be very helpful for us.
Good.
> However, when we export data with Neuroplayer on the machine used for recording data with Neurorecorder, we often
> experience crashes during the export process.
When you say "crash", does the Neuroplayer quit and disappear? Or does it freeze so that it will not respond to button presses? Or does it stop with an error message in its text window? If there is an error message, what is the message?
The Neuroplayer should never quit or freeze during the export process. If it does, there is a bug in my code that I have to fix. So long as the computer operating system keeps running, the Neuroplayer should keep running or stop and give you an error message.
> The only reason we can think of that could cause these crashes is the limited computational capacity of the computer
> used to record the signals.
Limited computational capacity should not cause a crash. I think it is more likely that there is a bug in my exporter code.
> Indeed CPU usage is at 97-100% when recording and exporting at the same time.
In that case, the Neuroplayer will not be able to export as fast as the data is recorded. But the Neuroplayer should not crash.
> Therefore, we aim to transfer the acquisition to a more powerful computer to record and export the signals
> simultaneously.
There is no harm in using a faster computer. But if it's a bug in my code, the bug will occur again. If the Neuroplayer crashes, please send me the NDF file it was exporting when it crashed. I will then be able to fix the problem, and you can export during recording.
> Is there a specific internal extension card (or another specific component) we have to add to the new machine we are
> assembling?
No. The Neuroplayer uses standard CPU instructions.
I'm going to study your answers to my synchronization questions now.
Best Wishes, Kevan
Re: reading NDF files in python
Dear Kevan,
Thank you for your rapid response.
Allow me to clarify a few elements.
"When you say "crash," does the Neuroplayer quit and disappear? Or does it freeze so that it will not respond to button presses? Or does it stop with an error message in its text window? If there is an error message, what is the message?"
What I mean by “crash” is that the Neuroplayer shuts down without returning any error message. It basically quits and disappears, as you say.
Here is some information that might be useful:
1) When the Neuroplayer crashes, the Neurorecorder doesn’t seem to be affected and keeps recording.
2) We have attempted exporting the same folder multiple times, and it is always the same files that cause the Neuroplayer to crash. I will send you these files by mail.
3) Visualizing the files responsible for the crashes with the Neuroplayer (using the “Play” button) poses no issues. It is only their export that causes the software to crash.
I hope this information is helpful to you.
Best wishes,
Raphaël Nunes da Silva
Re: reading NDF files in python
Hi Kevan,
Quick detail to add to Raphaël's post:
When the Neuroplayer crashes during exporting, it disappears with no error message.
Also, even though it is reassuring to know that a machine running at 98% CPU should not prevent your software from functioning correctly, I would like to upgrade the acquisition machine to a more powerful model.
Could you please confirm that the only hardware requirement for the machine is a supplementary network port?
Can we use a Linux OS instead of Windows?
Many thanks again for your precious help!
Marco
Re: reading NDF files in python
Dear Marco,
> Could you please confirm to me that the only hardware requirement for the machine is a
> supplementary network port?
Correct.
> Can we use a Linux OS instead of Windows?
Yes, you can use Linux, MacOS, or Windows. To run on Linux you will probably start with "./lwdaq" from the terminal, see here:
Best Wishes, Kevan
Re: reading NDF files in python
Dear Raphael,
> What I mean by “crash” is that the Neuroplayer shuts down without returning any error message.
> It basically quits and disappears, as you say.
Okay, thank you for clarifying. That's a real crash, and that should never happen. It's a bug in my code.
> When the Neuroplayer crashes, the Neurorecorder doesn’t seem to be affected and keeps recording.
Good. They are separate processes, so I expect the Neurorecorder to keep running.
> We have attempted exporting the same folder multiple times, and it is always the same files that cause the Neuroplayer to crash.
I am glad to hear that: I will be able to reproduce the problem and fix the bug.
> I will send you these files by mail.
I have them, and will look at them today.
> Visualizing the files responsible for the crashes with the Neuroplayer (using the “Play” button) poses no issues.
> It is only their export that causes the software to crash.
That is interesting. Thank you. You will hear from me soon.
Best Wishes, Kevan
Re: reading NDF files in python
Dear Raphael,
Thank you for your detailed explanation of the problems with overlap between export files.
> Once the conversion to EDF is complete:
> “M1702291203.ndf” has been converted to “E1702291203.edf”
> “M1702294804.ndf” has been converted to “E1702294795.edf”
This does not look right to me. That's not what I wanted the Exporter to do. There may be something unusual about your NDF file that is causing the Exporter to behave badly. I will start by fixing the bug in the Exporter that causes the Exporter to crash the Neuroplayer. After that, I will look at why the Exporter is creating these overlaps.
Right now, I think both of the problems you are having with the Exporter are due to bugs in our Exporter code. Which is good news, in a way, because we can fix problems in our own code. In the meantime, thank you for your patience, and thank you for answering all my questions.
Best Wishes, Kevan
Re: reading NDF files in python
Dear Raphael,
I have just exported all four of the files you sent me, no crash or freeze or stop, no error message. I'm using LWDAQ 10.6.10, Neuroplayer 170. The metadata in your NDF files says, "Creator: Neurorecorder 162, LWDAQ_10.5.2." The LWDAQ GitLog has the following entry just before the release of LWDAQ 10.5.4:
commit a188b7552d0c1ed4f3fd01708d6eb1b623f918f6
Author: Kevan Hashemi <hashemi@opensourceinstruments.com>
Date: Mon Feb 27 17:32:23 2023 -0500
Fixed Neuroexporter freeze bug in Windows.
A "freeze" bug is different from a "crash" bug. But we did a lot of work on the Neuroexporter between 10.5.3 and 10.5.4. I suggest you try upgrading your LWDAQ. If you are using our GitHub repository (link below), just do "git pull" to get the latest pre-release version.
If you install using our multi-platform ZIP archive, use this link to get LWDAQ 10.6.10:
Please try the new version and see if it can export without stopping.
Best Wishes, Kevan