This video is brilliant - it shows how you can send live video data over 10 km using simple ESP32 microcontrollers. Amazing.
If you are looking at submitting a patent application, you probably need to create a line drawing version of your work. This site has a great walkthrough guide:
Click this link to go to the website which walks you through it.
For a 3D model, the easiest method I found was to create a couple of versions of the model (e.g. collapsed and expanded), capture some screenshots of them from different perspectives, combine all the screenshots into one "big picture", and then follow that guide.
I output the serial data over Bluetooth with a simple header on the same line. For example, it would look like:
Say I was sending data from two sensors that have different sample rates. To convert the serial data into something usable, you can use "contains text" and the text segment method to categorise the individual sensor data. Works great! Also, in App Inventor any numbers in the serial string are treated as numbers, so it is very easy to work with.
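If you ever want to handle the same stream on a PC instead, the header trick is just as easy in ordinary code. Here is a minimal Python sketch, assuming each Bluetooth line starts with a short sensor tag followed by comma-separated values - the "IMU"/"FSR" tags and field counts below are my own invention, not from the original setup:

```python
# Parse a header-prefixed serial line such as "IMU:0.12,9.81,0.05".
# The tag identifies which sensor (and thus which sample rate) the
# values belong to, mirroring the "contains text" / text segment
# approach used in App Inventor.

def parse_line(line: str):
    """Split a 'TAG:v1,v2,...' line into (tag, [floats])."""
    tag, _, payload = line.strip().partition(":")
    values = [float(v) for v in payload.split(",") if v]
    return tag, values
```

Lines from a second, slower sensor (say "FSR:512") come out with a different tag, so the two streams can be separated even though they arrive interleaved.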
The right side of this image is great to use as a template for serial data over bluetooth: https://community.appinventor.mit.edu/uploads/default/original/2X/3/3183d1a513784e75c4188f76ab484510f6c65dd6.png
But you also need to allow permissions on Android 12. See the image below on how to do it.
Using two strain gauges to detect force
Using two gauges instead of just one gives you much less noise and a better quality signal.
This is from this forum link and works great:
"One white and one black (from different strain gauges) should be E+ and then the remaining white and black to E-; your two red wires should go to A+ and A-."
Depending on which red wire you connect to A+, you will get negative or positive values. You can either swap them to get the polarity you want or just negate the data in software.
The omconvert program is a great way to convert the raw binary file into a wav file which can be analysed much more efficiently than a .csv or text file. A few key things for me are:
1. Make sure to use the latest version, as older versions do not support the gyro data. This is particularly problematic because they still process it and output numbers, but the numbers are wrong. Yes, I've wasted hours before realising this. The file is on GitHub at:
openmovement/Software/OM/Plugins for release/Plugins/OmConvertPlugin/
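Once omconvert has produced the .wav, it's worth sanity-checking it before analysing anything. A small sketch using only Python's standard-library wave module - the file name is a placeholder, and the mapping of channels to accelerometer axes is an assumption, so check the omconvert documentation for your version:

```python
import wave

def wav_summary(path):
    """Return basic metadata for an omconvert-produced .wav file."""
    with wave.open(path, "rb") as w:
        return {
            "channels": w.getnchannels(),  # e.g. 3 for x/y/z (assumed mapping)
            "rate": w.getframerate(),      # resample rate chosen in omconvert
            "frames": w.getnframes(),      # total samples per channel
        }
```

A quick check that the channel count and sample rate match what you asked omconvert for can catch a wrong or stale export before you waste time analysing it.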
DeepLabCut is one heck of a program. The GUI is pretty easy to use once you have the hang of it, and it seems very powerful. Unfortunately the how-to guides aren't great, but the introductory video on using the GUI is a good step-by-step intro.
For me: download and install it all, then open the Anaconda 3 CMD with admin rights and type:
python -m deeplabcut
That launches the GUI. Some important things they don't mention:
1. If it looks like nothing is happening after you click a button sometimes you have to go back to the terminal window and press the "Enter" key.
2. It's best to change the directory to C:\Deeplabcut or something like that in the CMD window and run everything there. Otherwise it can get a bit confusing where things are.
3. Make sure to install the correct (i.e. old) version of wxPython:
pip install -U wxPython==4.0.7.post2
Absolutely brilliant work though otherwise.
EfficientPose looks very interesting - it seems fast and can generate decent frame rates in "real time". I'm not confident in the accuracy claims compared to OpenPose though, and given that it can only track a single person and is nowhere near as established as OpenPose, it may only have niche usefulness. Worth a look though:
I need to check out the more advanced models on pre-recorded data; I've only played with the real-time options, and I froze it up trying to run model IV.
I found the installation to be a little tricky: the method on the GitHub page didn't work for me, and I had to do a fresh install of the individual packages. It seems it can only run on Python 3.6; anything earlier or later is incompatible with the required TensorFlow version, etc.
Essentially, I just opened the "requirements.txt" file and did a pip install of each entry individually. For torch I had to remove the version number to make it work.
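That manual step can be semi-automated. A hedged sketch that turns requirements.txt lines into individual pip commands, stripping the version pin from any torch package - the package names and pins below are illustrative, not copied from the actual file:

```python
def pip_commands(lines):
    """Turn requirements.txt lines into individual 'pip install' commands."""
    cmds = []
    for line in lines:
        pkg = line.strip()
        if not pkg or pkg.startswith("#"):
            continue  # skip blanks and comments
        if pkg.lower().startswith("torch"):
            # Drop the pinned version, as described above.
            # (Also matches torchvision etc., which may or may not be wanted.)
            pkg = pkg.split("==")[0]
        cmds.append(f"pip install {pkg}")
    return cmds

# Example with made-up pins:
cmds = pip_commands(["numpy==1.18.0", "torch==1.4.0"])
# -> ['pip install numpy==1.18.0', 'pip install torch']
```

You could feed it `open("requirements.txt")` directly, since a file object iterates line by line.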
The Openpose demo is great:
The following is specific to my computer, but it's now very simple to test and save data from OpenPose.
A description of the json file format is here:
**To close it you need to close the PowerShell window itself.
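For working with the saved output programmatically, the keypoint JSON is easy to read. A minimal Python sketch; the "people" / "pose_keypoints_2d" keys follow OpenPose's documented output format (a flat [x, y, confidence, ...] list per person), though exact keys can vary between OpenPose versions:

```python
import json

def keypoints(json_text):
    """Return, for each detected person, a list of (x, y, confidence) triples."""
    doc = json.loads(json_text)
    people = []
    for person in doc.get("people", []):
        flat = person["pose_keypoints_2d"]  # flat [x1, y1, c1, x2, y2, c2, ...]
        people.append([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    return people
```

Since the demo writes one JSON file per frame, looping this over the output directory gives a per-frame time series of joint positions.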
NinjaFlex might be my favourite 3D filament to print with. It is quite amazing what you can create out of this highly flexible material. We're using it to create ankle bands for the Axivity AX3 accelerometer (there's a how-to guide up here for it), but you need to get your settings just right or it turns into a mess of strings. My tips for printing on a Lulzbot Taz 6 with an Aerostruder are:
1. Slow the print and travel speeds right down, the slower the better. I print at 15mm/s and travel at 200mm/s.
2. Retract the filament A LOT. I use 5mm retraction, any less can cause issues.
3. I only print it with a 0.15mm layer height. Any higher and I find it can cause print failures.
4. Reduce the temperature. I use 215 degrees Celsius. The default temp is too high and causes stringing for me.
I've attached the curaprofile here.
Another important thing to do - for me at least - is to do at least one cold pull between prints. I find that if the NinjaFlex cools down in the head it causes it to jam. I heat the head to 260 degrees, pull the NinjaFlex out at around 210 degrees, extrude PLA through it at 260 for a second or two, and then start the cooldown to 70 degrees. This is annoying, but now I don't get any more jams or problem prints.
Intensity Charts, Heat Maps and 2D Histograms - Easy to interpret or a minefield of potential misinformation?
While preparing a data analysis program for accelerometer data collected in people living with stroke, my task was to analyse raw data from two wrist-worn monitors, one on each arm. One of my collaborators was keen to replicate the 2D histogram methods in this paper:
Hayward, K. S., Eng, J. J., Boyd, L. A., Lakhani, B., Bernhardt, J., & Lang, C. E. (2016). Exploring the Role of Accelerometers in the Measurement of Real World Upper-Limb Use After Stroke. Brain Impairment, 17(1).
It seemed quite easy, so I went ahead with it. However, I soon realised the problems with presenting data this way when it is so heavily skewed. Take note of these two graphs, both of the exact same data, with the only differences being the Z-axis colour bar scale and a zoom in on the Y-axis. Changing the Z-scale makes the data look completely different, skewing it in either direction depending on the settings. In the top graph it looks like the vast majority of the time the person favours their left hand for movement, and that much of this occurs at bilateral magnitudes greater than 0.05 g. The bottom graph shows the opposite: the majority of the time the person favours the right hand, and almost none of the movements occur above 0.05 g. So which one is correct?
Without the 1D histogram information provided for each of the Y- and X-scale variables I now realise that these intensity charts / heat maps / 2D histograms can be a trap. The image below shows the 2D histogram with the 1D histograms for each of the axes aligned with it. This helps to tell the tale more truthfully, but adds another layer of complexity with respect to interpretation.
These graphs looked simple at first glance, but as often happens in research, they were not. A log-transformed Z-scale could overcome some of this problem, but it introduces another issue: changes in colour gradient then become non-linear as well, and therefore even harder to interpret.
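The colour-scale trap itself is easy to demonstrate without any plotting library. In matplotlib terms, capping vmax on the colour bar is equivalent to clipping the bin counts before display, so a heavily dominant bin becomes indistinguishable from a modest one - the counts below are made up purely for illustration:

```python
def clipped(counts, vmax):
    """Simulate what a colour bar capped at vmax actually displays."""
    return [[min(c, vmax) for c in row] for row in counts]

# One dominant bin (1000 counts) among small ones:
counts = [[1000, 2],
          [3, 1]]

# With vmax=5, the 1000-count bin displays the same as a 5-count bin,
# so the skew in the underlying data disappears from the picture.
capped = clipped(counts, 5)  # -> [[5, 2], [3, 1]]
```

This is exactly why two plots of identical data can tell opposite stories: the colour mapping changes, not the data.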