<!-- mdformat off(b/169948621#comment2) -->

# Micro Speech Example

This example shows how to run a 20 kB model that can recognize two keywords,
"yes" and "no", from speech data.

The application listens to its surroundings with a microphone and indicates
when it has detected a word by lighting an LED or displaying data on a
screen, depending on the capabilities of the device.

![Animation on Arduino](images/animation_on_arduino.gif)

The code has a small footprint (for example, around 22 kilobytes on a
Cortex-M3) and uses only about 10 kilobytes of RAM for working memory, so it's
able to run on systems like an STM32F103 with only 20 kilobytes of total SRAM
and 64 kilobytes of Flash.

## Table of contents

-   [Deploy to ARC EM SDP](#deploy-to-arc-em-sdp)
-   [Deploy to Arduino](#deploy-to-arduino)
-   [Deploy to ESP32](#deploy-to-esp32)
-   [Deploy to SparkFun Edge](#deploy-to-sparkfun-edge)
-   [Deploy to STM32F746](#deploy-to-stm32f746)
-   [Deploy to NXP FRDM K66F](#deploy-to-nxp-frdm-k66f)
-   [Deploy to HIMAX WE1 EVB](#deploy-to-himax-we1-evb)
-   [Deploy to CEVA BX1/SP500](#deploy-to-ceva-bx1)
-   [Run on macOS](#run-on-macos)
-   [Run the tests on a development machine](#run-the-tests-on-a-development-machine)
-   [Train your own model](#train-your-own-model)

## Deploy to ARC EM SDP

The following instructions will help you build and deploy this example to the
[ARC EM SDP](https://www.synopsys.com/dw/ipdir.php?ds=arc-em-software-development-platform)
board. General information and instructions on using the board with TensorFlow
Lite Micro can be found in the common
[ARC targets description](/tensorflow/lite/micro/tools/make/targets/arc/README.md).

This example uses asymmetric int8 quantization and can therefore leverage
optimized int8 kernels from the embARC MLI library.

The ARC EM SDP board contains a rich set of extension interfaces. You can
choose any compatible microphone and modify the
[audio_provider.cc](/tensorflow/lite/micro/examples/micro_speech/audio_provider.cc)
file accordingly to use input from your specific microphone. By default, the
results of running this example are printed to the console. If you would like
to implement target-specific actions instead, you need to modify
[command_responder.cc](/tensorflow/lite/micro/examples/micro_speech/command_responder.cc)
accordingly.
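
For a flavor of what such a change looks like, here is a minimal sketch of a
custom responder. It assumes the `RespondToCommand` hook used by this example;
the signature below matches older TFLM releases (newer ones drop the
`ErrorReporter` parameter), so check `command_responder.h` in your checkout:

```cpp
// Hypothetical target-specific command_responder.cc. Replace the logging
// with whatever your board should do, e.g. driving a GPIO wired to an LED.
#include "tensorflow/lite/micro/examples/micro_speech/command_responder.h"

void RespondToCommand(tflite::ErrorReporter* error_reporter,
                      int32_t current_time, const char* found_command,
                      uint8_t score, bool is_new_command) {
  if (is_new_command) {
    TF_LITE_REPORT_ERROR(error_reporter, "Heard %s (%d) @%dms", found_command,
                         score, current_time);
  }
}
```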

The reference implementations of these files are used by default on the EM SDP.

### Initial setup

Follow the instructions in the
[ARC EM SDP Initial Setup](/tensorflow/lite/micro/tools/make/targets/arc/README.md#ARC-EM-Software-Development-Platform-ARC-EM-SDP)
section to get and install all of the tools required to work with the ARC EM
SDP.

### Generate Example Project

Since the default example produces no output without real audio, it is
recommended to get started with the mock-data version of the example. The
project for the ARC EM SDP platform can be generated with the following
command:

```
make -f tensorflow/lite/micro/tools/make/Makefile \
TARGET=arc_emsdp ARC_TAGS=reduce_codesize \
OPTIMIZED_KERNEL_DIR=arc_mli \
generate_micro_speech_mock_make_project
```

Note that `ARC_TAGS=reduce_codesize` applies example-specific code changes to
reduce the total size of the application. It can be omitted.

### Build and Run Example

For more detailed information on building and running examples, see the
appropriate sections of the general description of
[ARC EM SDP usage with TFLM](/tensorflow/lite/micro/tools/make/targets/arc/README.md#ARC-EM-Software-Development-Platform-ARC-EM-SDP).
The directory with the generated project also contains a
*README_ARC_EMSDP.md* file with instructions and options for building and
running. Here we only briefly mention the main steps, which are typically
enough to get started.

1.  You need to
    [connect the board](/tensorflow/lite/micro/tools/make/targets/arc/README.md#connect-the-board)
    and open a serial connection.

2.  Go to the directory of the generated example project:

    ```
    cd tensorflow/lite/micro/tools/make/gen/arc_emsdp_arc/prj/micro_speech_mock/make
    ```

3.  Build the example using:

    ```
    make app
    ```

4.  To generate artifacts for self-booting the example from the board, use:

    ```
    make flash
    ```

5.  To run the application from the board using a microSD card:

    *   Copy the contents of the created /bin folder into the root of the
        microSD card. Note that the card must be formatted as FAT32 with the
        default cluster size (but less than 32 Kbytes).
    *   Plug the microSD card into the J11 connector.
    *   Push the RST button. If a red LED is lit beside the RST button, push
        the CFG button.
    *   Type or copy the following commands one after another into the serial
        terminal:

        ```
        setenv loadaddr 0x10800000
        setenv bootfile app.elf
        setenv bootdelay 1
        setenv bootcmd fatload mmc 0 \$\{loadaddr\} \$\{bootfile\} \&\& bootelf
        saveenv
        ```

    *   Push the RST button.

6.  If you have the MetaWare Debugger installed in your environment:

    *   To run the application from the console, type `make run`.
    *   To stop execution, type `Ctrl+C` in the console several times.

In both cases (steps 5 and 6) you will see the application output in the
serial terminal.

## Deploy to Arduino

The following instructions will help you build and deploy this example to
[Arduino](https://www.arduino.cc/) devices.

The example has been tested with the following devices:

- [Arduino Nano 33 BLE Sense](https://store.arduino.cc/usa/nano-33-ble-sense-with-headers)

The Arduino Nano 33 BLE Sense is currently the only Arduino with a built-in
microphone. If you're using a different Arduino board and attaching your own
microphone, you'll need to implement your own `audio_provider.cc`. The Nano 33
BLE Sense also has a built-in LED, which is used to indicate that a word has
been recognized.
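
If you do need to write your own audio provider, it must implement the audio
capture interface this example expects. The sketch below is a hypothetical
stand-in that returns silence; the two function signatures follow this
example's `audio_provider.h`, but verify them in your checkout:

```cpp
// Hypothetical audio_provider.cc that supplies silence instead of real
// microphone data. A real implementation fills the buffer with samples
// captured between start_ms and start_ms + duration_ms.
#include "tensorflow/lite/micro/examples/micro_speech/audio_provider.h"

#include "tensorflow/lite/micro/examples/micro_speech/micro_features/micro_model_settings.h"

namespace {
int16_t g_dummy_audio_data[kMaxAudioSampleSize];
int32_t g_latest_audio_timestamp = 0;
}  // namespace

TfLiteStatus GetAudioSamples(tflite::ErrorReporter* error_reporter,
                             int start_ms, int duration_ms,
                             int* audio_samples_size, int16_t** audio_samples) {
  *audio_samples_size = kMaxAudioSampleSize;
  *audio_samples = g_dummy_audio_data;
  return kTfLiteOk;
}

int32_t LatestAudioTimestamp() {
  // A real implementation returns the timestamp of the newest captured audio.
  g_latest_audio_timestamp += 100;
  return g_latest_audio_timestamp;
}
```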

### Install the Arduino_TensorFlowLite library

This example application is included as part of the official TensorFlow Lite
Arduino library. To install it, open the Arduino library manager in
`Tools -> Manage Libraries...` and search for `Arduino_TensorFlowLite`.

### Load and run the example

Once the library has been added, go to `File -> Examples`. You should see an
example near the bottom of the list named `TensorFlowLite:micro_speech`. Select
it and click `micro_speech` to load the example.

Use the Arduino IDE to build and upload the example. Once it is running, you
should see the built-in LED on your device flashing. Saying the word "yes" will
cause the LED to remain on for 3 seconds. The current model has fairly low
accuracy, so you may have to repeat "yes" a few times.

The program also outputs inference results to the serial port, which appear as
follows:

```
Heard yes (201) @4056ms
Heard no (205) @6448ms
Heard unknown (201) @13696ms
Heard yes (205) @15000ms
```

The number after each detected word is its score. By default, the program only
considers matches as valid if their score is over 200, so all of the scores you
see will be at least 200.
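
The threshold is applied by the example's `RecognizeCommands` helper, which
averages model scores over a short window before reporting a command. As a
rough sketch of how you might lower the threshold, assuming the constructor
parameters used by older releases of this example (check
`recognize_commands.h` in your checkout):

```cpp
// Hypothetical tweak mirroring the recognizer set up in main_functions.cc:
// the third argument is the detection threshold, lowered here from 200 to 180.
#include "tensorflow/lite/micro/examples/micro_speech/recognize_commands.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"

void SetUpRecognizer() {
  static tflite::MicroErrorReporter micro_error_reporter;
  static RecognizeCommands recognizer(&micro_error_reporter,
                                      /*average_window_duration_ms=*/1000,
                                      /*detection_threshold=*/180,
                                      /*suppression_ms=*/1500,
                                      /*minimum_count=*/3);
  // Feed each inference result to recognizer.ProcessLatestResults(...).
  (void)recognizer;
}
```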

When the program is run, it waits 5 seconds for a USB-serial connection to be
available. If there is no connection available, it will not output data. To see
the serial output in the Arduino desktop IDE, do the following:

1.  Open the Arduino IDE.
2.  Connect the Arduino board to your computer via USB.
3.  Press the reset button on the Arduino board.
4.  Within 5 seconds, go to `Tools -> Serial Monitor` in the Arduino IDE. You
    may have to try several times, since the board will take a moment to
    connect.

If you don't see any output, repeat the process.

## Deploy to ESP32

The following instructions will help you build and deploy this example to
[ESP32](https://www.espressif.com/en/products/hardware/esp32/overview) devices
using the [ESP IDF](https://github.com/espressif/esp-idf).

The example has been tested on ESP-IDF version 4.0 with the following devices:

-   [ESP32-DevKitC](http://esp-idf.readthedocs.io/en/latest/get-started/get-started-devkitc.html)
-   [ESP-EYE](https://github.com/espressif/esp-who/blob/master/docs/en/get-started/ESP-EYE_Getting_Started_Guide.md)

The ESP-EYE is a board with a built-in microphone that can be used to run this
example. If you want to use other ESP boards, you will have to connect a
microphone externally and write your own
[audio_provider.cc](esp/audio_provider.cc). You can also edit
[command_responder.cc](command_responder.cc) to define your own actions after a
command is detected.
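
For example, on an ESP32 board you could drive a GPIO-connected LED through the
ESP-IDF driver. This is a minimal sketch under stated assumptions: GPIO 2 is an
arbitrary choice, and the `RespondToCommand` signature follows older TFLM
releases:

```cpp
// Hypothetical command_responder.cc for an ESP32 board: lights an LED on
// GPIO 2 (pick the pin actually wired to your LED) when "yes" is heard.
#include "tensorflow/lite/micro/examples/micro_speech/command_responder.h"

#include <cstring>

#include "driver/gpio.h"

void RespondToCommand(tflite::ErrorReporter* error_reporter,
                      int32_t current_time, const char* found_command,
                      uint8_t score, bool is_new_command) {
  static bool is_initialized = false;
  if (!is_initialized) {
    gpio_set_direction(GPIO_NUM_2, GPIO_MODE_OUTPUT);
    is_initialized = true;
  }
  if (is_new_command) {
    // LED on for "yes", off for anything else.
    gpio_set_level(GPIO_NUM_2, strcmp(found_command, "yes") == 0 ? 1 : 0);
  }
}
```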

### Install the ESP IDF

Follow the instructions in the
[ESP-IDF get started guide](https://docs.espressif.com/projects/esp-idf/en/latest/get-started/index.html)
to set up the toolchain and the ESP-IDF itself.

The next steps assume that the
[IDF environment variables are set](https://docs.espressif.com/projects/esp-idf/en/latest/get-started/index.html#step-4-set-up-the-environment-variables):

*   The `IDF_PATH` environment variable is set
*   `idf.py` and Xtensa-esp32 tools (e.g. `xtensa-esp32-elf-gcc`) are in `$PATH`

### Generate the example

The example project can be generated with the following command:

```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=esp generate_micro_speech_esp_project
```

### Build the example

Go to the example project directory:

```
cd tensorflow/lite/micro/tools/make/gen/esp_xtensa-esp32/prj/micro_speech/esp-idf
```

Then build the project with `idf.py`:

```
idf.py build
```

### Load and run the example

To flash (replace `/dev/ttyUSB0` with the device serial port):

```
idf.py --port /dev/ttyUSB0 flash
```

Monitor the serial output:

```
idf.py --port /dev/ttyUSB0 monitor
```

Use `Ctrl+]` to exit.

The previous two commands can be combined:

```
idf.py --port /dev/ttyUSB0 flash monitor
```

## Deploy to SparkFun Edge

The following instructions will help you build and deploy this example on the
[SparkFun Edge development board](https://sparkfun.com/products/15170).

The program will toggle the blue LED on and off with each inference. It will
switch on the yellow LED when a "yes" is heard, the red LED when a "no" is
heard, and the green LED when an unknown command is heard.

The
[AI on a microcontroller with TensorFlow Lite and SparkFun Edge](https://codelabs.developers.google.com/codelabs/sparkfun-tensorflow)
codelab walks through the deployment process in detail. The steps are also
summarized below.

### Compile the binary

The following command will download the required dependencies and then compile a
binary for the SparkFun Edge:

```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=sparkfun_edge TAGS="cmsis_nn" micro_speech_bin
```

The binary will be created in the following location:

```
tensorflow/lite/micro/tools/make/gen/sparkfun_edge_cortex-m4/bin/micro_speech.bin
```

### Sign the binary

The binary must be signed with cryptographic keys to be deployed to the device.
We'll now run some commands that will sign our binary so it can be flashed to
the SparkFun Edge. The scripts we are using come from the Ambiq SDK, which is
downloaded when the `Makefile` is run.

Enter the following command to set up some dummy cryptographic keys we can use
for development:

```
cp tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/keys_info0.py \
tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/keys_info.py
```

Next, run the following command to create a signed binary:

```
python3 tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/create_cust_image_blob.py \
--bin tensorflow/lite/micro/tools/make/gen/sparkfun_edge_cortex-m4/bin/micro_speech.bin \
--load-address 0xC000 \
--magic-num 0xCB \
-o main_nonsecure_ota \
--version 0x0
```

This will create the file `main_nonsecure_ota.bin`. We'll now run another
command to create a final version of the file that can be used to flash our
device with the bootloader script we will use in the next step:

```
python3 tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/create_cust_wireupdate_blob.py \
--load-address 0x20000 \
--bin main_nonsecure_ota.bin \
-i 6 \
-o main_nonsecure_wire \
--options 0x1
```

You should now have a file called `main_nonsecure_wire.bin` in the directory
where you ran the commands. This is the file we'll be flashing to the device.

### Flash the binary

Next, attach the board to your computer via a USB-to-serial adapter.

**Note:** If you're using the
[SparkFun Serial Basic Breakout](https://www.sparkfun.com/products/15096),
you should
[install the latest drivers](https://learn.sparkfun.com/tutorials/sparkfun-serial-basic-ch340c-hookup-guide#drivers-if-you-need-them)
before you continue.

Once connected, assign the USB device name to an environment variable:

```
export DEVICENAME=put your device name here
```

Set another variable with the baud rate:

```
export BAUD_RATE=921600
```

Now, hold the button marked `14` on the device. While still holding the button,
hit the button marked `RST`. Continue holding the button marked `14` while
running the following command:

```
python3 tensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.2.0/tools/apollo3_scripts/uart_wired_update.py \
-b ${BAUD_RATE} ${DEVICENAME} \
-r 1 \
-f main_nonsecure_wire.bin \
-i 6
```

You should see a long stream of output as the binary is flashed to the device.
Once you see the following lines, flashing is complete:

```
Sending Reset Command.
Done.
```

If you don't see these lines, flashing may have failed. Try running through the
steps in [Flash the binary](#flash-the-binary) again (you can skip over setting
the environment variables). If you continue to run into problems, follow the
[AI on a microcontroller with TensorFlow Lite and SparkFun Edge](https://codelabs.developers.google.com/codelabs/sparkfun-tensorflow)
codelab, which includes more comprehensive instructions for the flashing
process.

The binary should now be deployed to the device. Hit the button marked `RST` to
reboot the board.

You should see the device's blue LED flashing. The yellow LED should light when
a "yes" is heard, the red LED when a "no" is heard, and the green LED when an
unknown command is heard. The current model has fairly low accuracy, so you may
have to repeat "yes" a few times.

Debug information is logged by the board while the program is running. To view
it, establish a serial connection to the board using a baud rate of `115200`.
On macOS and Linux, the following command should work:

```
screen ${DEVICENAME} 115200
```

You will see a line of output for every word that is detected:

```
Heard yes (201) @4056ms
Heard no (205) @6448ms
Heard unknown (201) @13696ms
Heard yes (205) @15000ms
```

The number after each detected word is its score. By default, the program only
considers matches as valid if their score is over 200, so all of the scores you
see will be at least 200.

To stop viewing the debug output with `screen`, hit `Ctrl+A`, immediately
followed by the `K` key, then hit the `Y` key.

## Deploy to STM32F746

The following instructions will help you build and deploy the example to the
[STM32F7 discovery kit](https://os.mbed.com/platforms/ST-Discovery-F746NG/)
using [ARM Mbed](https://github.com/ARMmbed/mbed-cli).

Before we begin, you'll need the following:

- STM32F7 discovery kit board
- Mini-USB cable
- ARM Mbed CLI ([installation instructions](https://os.mbed.com/docs/mbed-os/v6.9/quick-start/build-with-mbed-cli.html);
  note that mbed-cli is broken on macOS Catalina, see
  [mbed-cli is broken on MacOS Catalina #930](https://github.com/ARMmbed/mbed-cli/issues/930#issuecomment-660550734))
- Python 3 and pip3

Since Mbed requires a special folder structure for projects, we'll first run a
command to generate a subfolder containing the required source files in this
structure:

```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=disco_f746ng OPTIMIZED_KERNEL_DIR=cmsis_nn generate_micro_speech_mbed_project
```

Running the make command will result in the creation of a new folder:

```
tensorflow/lite/micro/tools/make/gen/disco_f746ng_cortex-m4_default/prj/micro_speech/mbed
```

This folder contains all of the example's dependencies structured in the
correct way for Mbed to be able to build it.

Change into the directory and run the following commands.

First, tell Mbed that the current directory is the root of an Mbed project:

```
mbed config root .
```

Next, tell Mbed to download the dependencies and prepare to build:

```
mbed deploy
```

Older versions of Mbed will build the project using C++98. However, TensorFlow
Lite requires C++11. If needed, run the following Python snippet to modify the
Mbed configuration files so that it uses C++11:

```
python -c 'import fileinput, glob;
for filename in glob.glob("mbed-os/tools/profiles/*.json"):
  for line in fileinput.input(filename, inplace=True):
    print(line.replace("\"-std=gnu++98\"","\"-std=c++11\", \"-fpermissive\""))'
```

Note: Mbed depends on an old version of `arm_math.h` and `cmsis_gcc.h` (adapted
from the general
[CMSIS-NN MBED example](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/kernels/cmsis_nn#example-2---mbed)).
You therefore need to copy in the newer versions as follows:

```bash
cp tensorflow/lite/micro/tools/make/downloads/cmsis/CMSIS/DSP/Include/\
arm_math.h mbed-os/cmsis/TARGET_CORTEX_M/arm_math.h
cp tensorflow/lite/micro/tools/make/downloads/cmsis/CMSIS/Core/Include/\
cmsis_gcc.h mbed-os/cmsis/TARGET_CORTEX_M/cmsis_gcc.h
```

Finally, run the following command to compile:

```
mbed compile -m DISCO_F746NG -t GCC_ARM
```

This should result in a binary at the following path:

```
./BUILD/DISCO_F746NG/GCC_ARM/mbed.bin
```

To deploy, plug in your STM board and copy the file to it. On macOS, you can do
this with the following command:

```
cp ./BUILD/DISCO_F746NG/GCC_ARM/mbed.bin /Volumes/DIS_F746NG/
```

Copying the file will initiate the flashing process.

The inference results are logged by the board while the program is running. To
view them, establish a serial connection to the board using a baud rate of
`9600`. On macOS and Linux, the following command should work, replacing
`/dev/tty.devicename` with the name of your device as it appears in `/dev`:

```
screen /dev/tty.devicename 9600
```

You will see a line of output for every word that is detected:

```
Heard yes (201) @4056ms
Heard no (205) @6448ms
Heard unknown (201) @13696ms
Heard yes (205) @15000ms
```

The number after each detected word is its score. By default, the program only
considers matches as valid if their score is over 200, so all of the scores you
see will be at least 200.

To stop viewing the debug output with `screen`, hit `Ctrl+A`, immediately
followed by the `K` key, then hit the `Y` key.

## Deploy to NXP FRDM K66F

The following instructions will help you build and deploy the example to the
[NXP FRDM K66F](https://www.nxp.com/design/development-boards/freedom-development-boards/mcu-boards/freedom-development-platform-for-kinetis-k66-k65-and-k26-mcus:FRDM-K66F)
using [ARM Mbed](https://github.com/ARMmbed/mbed-cli).

1.  Download
    [the TensorFlow source code](https://github.com/tensorflow/tensorflow).
2.  Follow the instructions on the
    [Mbed website](https://os.mbed.com/docs/mbed-os/v5.13/tools/installation-and-setup.html)
    to set up and install the Mbed CLI.
3.  Compile TensorFlow with the following command to generate the Mbed project:

    ```
    make -f tensorflow/lite/micro/tools/make/Makefile TARGET=mbed TAGS="nxp_k66f" generate_micro_speech_mbed_project
    ```

4.  Change into the following directory that has been generated:
    `tensorflow/lite/micro/tools/make/gen/mbed_cortex-m4/prj/micro_speech/mbed`

5.  Create an Mbed project using the generated files. Ensuring your
    environment is using Python 2.7, run: `mbed config root .`

6.  Next, tell Mbed to download the dependencies and prepare to build: `mbed
    deploy`

7.  Finally, run the following command to compile the code: `mbed compile
    -m K66F -t GCC_ARM`

8.  For some Mbed compilers (such as GCC), you may get a compile error in
    `mbed_rtc_time.cpp`. Go to `mbed-os/platform/mbed_rtc_time.h` and comment
    out lines 32 and 37:

    ```
    //#if !defined(__GNUC__) || defined(__CC_ARM) || defined(__clang__)
    struct timeval {
      time_t tv_sec;
      int32_t tv_usec;
    };
    //#endif
    ```

9.  If your system does not recognize the board with the `mbed detect`
    command, follow the instructions for setting up
    [DAPLink](https://armmbed.github.io/DAPLink/?board=FRDM-K66F) for the
    [K66F](https://os.mbed.com/platforms/FRDM-K66F/).

10. Connect the USB cable to the micro USB port. When the Ethernet port is
    facing towards you, the micro USB port is to the left of the Ethernet port.

11. To compile and flash in a single step, add the `--flash` option:

    ```
    mbed compile -m K66F -t GCC_ARM --flash
    ```

12. Disconnect the USB cable from the device to power it down, then reconnect
    the cable to start running the model.

13. Connect to the serial port with a baud rate of 9600 to view the output
    from the MCU. On Linux, you can run the following `screen` command if the
    serial device is `/dev/ttyACM0`:

    ```
    sudo screen /dev/ttyACM0 9600
    ```

14. Saying "Yes" will print "Yes" and saying "No" will print "No" on the
    serial port.

15. A loopback path from the microphone to the headset jack (the black
    connector) is enabled. If there is no output on the serial port, you can
    connect headphones to the headset jack to check whether the audio loopback
    path is working.

## Deploy to HIMAX WE1 EVB

The following instructions will help you build and deploy this example to the
[HIMAX WE1 EVB](https://github.com/HimaxWiseEyePlus/bsp_tflu/tree/master/HIMAX_WE1_EVB_board_brief)
board. To understand more about using this board, please check the
[HIMAX WE1 EVB user guide](https://github.com/HimaxWiseEyePlus/bsp_tflu/tree/master/HIMAX_WE1_EVB_user_guide).

### Initial Setup

To use the HIMAX WE1 EVB, please make sure the following software is installed:

#### MetaWare Development Toolkit

See the
[Install the Synopsys DesignWare ARC MetaWare Development Toolkit](/tensorflow/lite/micro/tools/make/targets/arc/README.md#install-the-synopsys-designware-arc-metaware-development-toolkit)
section for instructions on toolchain installation.

#### Make Tool version

A `make` tool is required for deploying TensorFlow Lite Micro applications on
the HIMAX WE1 EVB. See the
[Check make tool version](/tensorflow/lite/micro/tools/make/targets/arc/README.md#make-tool)
section for the proper environment.

#### Serial Terminal Emulation Application

The HIMAX WE1 EVB Debug UART port serves two main purposes:

-   printing application output
-   burning the application to flash (by sending the application binary over
    XMODEM)

You can use any terminal emulation program (like [PuTTY](https://www.putty.org/)
or [minicom](https://linux.die.net/man/1/minicom)).

### Generate Example Project

The example project for the HIMAX WE1 EVB platform can be generated with the
following commands.

First, download the related third-party data:

```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=himax_we1_evb third_party_downloads
```

Then generate the micro speech project:

```
make -f tensorflow/lite/micro/tools/make/Makefile generate_micro_speech_make_project TARGET=himax_we1_evb
```

### Build and Burn Example

Follow these steps to run the micro speech example on the HIMAX WE1 EVB
platform.

1.  Go to the generated example project directory:

    ```
    cd tensorflow/lite/micro/tools/make/gen/himax_we1_evb_arc/prj/micro_speech/make
    ```

2.  Build the example using:

    ```
    make app
    ```

3.  After the build finishes, copy the ELF file and map file to the image
    generation tool directory, located at
    `'tensorflow/lite/micro/tools/make/downloads/himax_we1_sdk/image_gen_linux_v3/'`:

    ```
    cp micro_speech.elf himax_we1_evb.map ../../../../../downloads/himax_we1_sdk/image_gen_linux_v3/
    ```

4.  Go to the flash image generation tool directory:

    ```
    cd ../../../../../downloads/himax_we1_sdk/image_gen_linux_v3/
    ```

    Make sure this tool directory is in your $PATH. You can add it permanently
    with:

    ```
    export PATH=$PATH:$(pwd)
    ```

5.  Run the image generation tool to generate the flash image file:

    *   Before running the tool, type `sudo chmod +x image_gen` and `sudo
        chmod +x sign_tool` to make sure the tools are executable.

    ```
    image_gen -e micro_speech.elf -m himax_we1_evb.map -o out.img
    ```

6.  Download the flash image file to the HIMAX WE1 EVB over UART:

    *   More detail about downloading the image through UART can be found in
        [HIMAX WE1 EVB update Flash image](https://github.com/HimaxWiseEyePlus/bsp_tflu/tree/master/HIMAX_WE1_EVB_user_guide#flash-image-update).

After these steps, press the reset button on the HIMAX WE1 EVB. You will see
the application output in the serial terminal and the LED lighting up.

![Animation on Himax WE1 EVB](https://raw.githubusercontent.com/HimaxWiseEyePlus/bsp_tflu/master/HIMAX_WE1_EVB_user_guide/images/tflm_example_micro_speech_int8_led.gif)

## Deploy to CEVA-BX1

The following instructions will help you build and deploy the sample to the
[CEVA-BX1](https://www.ceva-dsp.com/product/ceva-bx1-sound/) or
[CEVA-SP500](https://www.ceva-dsp.com/product/ceva-senspro/).

1.  Contact CEVA at [sales@ceva-dsp.com](mailto:sales@ceva-dsp.com).

2.  For BX1:

    1.  Download and install CEVA-BX Toolbox v18.0.2.
    2.  Set the TARGET_TOOLCHAIN_ROOT variable in
        `/tensorflow/lite/micro/tools/make/templates/ceva_bx1/ceva_app_makefile.tpl`
        to your installation location. For example: `TARGET_TOOLCHAIN_ROOT :=
        /home/myuser/work/CEVA-ToolBox/V18/BX`
    3.  Generate the Makefile for the project:

        ```
        make -f tensorflow/lite/micro/tools/make/Makefile TARGET=ceva TARGET_ARCH=CEVA_BX1 generate_micro_speech_make_project
        ```

3.  For SensPro (SP500):

    1.  Download and install CEVA-SP Toolbox v20.
    2.  Set the TARGET_TOOLCHAIN_ROOT variable in
        `/tensorflow/lite/micro/tools/make/templates/ceva_SP500/ceva_app_makefile.tpl`
        to your installation location. For example: `TARGET_TOOLCHAIN_ROOT :=
        /home/myuser/work/CEVA-ToolBox/V20/SensPro`
    3.  Generate the Makefile for the project:

        ```
        make -f tensorflow/lite/micro/tools/make/Makefile TARGET=ceva TARGET_ARCH=CEVA_SP500 generate_micro_speech_make_project
        ```

4.  Build the project by running `make` in
    `/tensorflow/lite/micro/tools/make/gen/ceva_bx1/prj/micro_speech/make`.
    This should build the project and create a file called `micro_speech.elf`.

5.  The supplied configuration reads input from a file and expects a file
    called `input.wav` (easily changed in `audio_provider.cc`) to be placed in
    the same directory as the .elf file.

6.  We used Google's speech commands dataset:
    V0.0.2: http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz
    V0.0.1: http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz

7.  Follow the CEVA Toolbox instructions for creating a debug target and
    running the project.

8.  The output should look like:

    ```
    Heard silence (208) @352ms
    Heard no (201) @1696ms
    Heard yes (203) @3904ms
    ```

## Run on macOS

The example contains an audio provider compatible with macOS. If you have access
to a Mac, you can run the example on your development machine.

First, use the following command to build it:

```
make -f tensorflow/lite/micro/tools/make/Makefile micro_speech
```

Once the build completes, you can run the example with the following command:

```
tensorflow/lite/micro/tools/make/gen/osx_x86_64/bin/micro_speech
```

You might see a pop-up asking for microphone access. If so, grant it, and the
program will start.

Try saying "yes" and "no". You should see output that looks like the following:

```
Heard yes (201) @4056ms
Heard no (205) @6448ms
Heard unknown (201) @13696ms
Heard yes (205) @15000ms
Heard yes (205) @16856ms
Heard unknown (204) @18704ms
Heard no (206) @21000ms
```

The number after each detected word is its score. By default, the recognize
commands component only considers matches as valid if their score is over 200,
so all of the scores you see will be at least 200.

The number after the score is the number of milliseconds since the program was
started.

If you don't see any output, make sure your Mac's internal microphone is
selected in the Mac's *Sound* menu, and that its input volume is turned up high
enough.

## Run the tests on a development machine

To compile and test this example on a desktop Linux or macOS machine, download
[the TensorFlow source code](https://github.com/tensorflow/tensorflow), `cd`
into the source directory from a terminal, and then run the following command:

```
make -f tensorflow/lite/micro/tools/make/Makefile test_micro_speech_test
```

This will take a few minutes, and it downloads the frameworks the code uses,
like [CMSIS](https://developer.arm.com/embedded/cmsis) and
[flatbuffers](https://google.github.io/flatbuffers/). Once that process has
finished, you should see a series of files get compiled, followed by some
logging output from a test, which should conclude with `~~~ALL TESTS PASSED~~~`.

If you see this, it means that a small program has been built and run that
loads the trained TensorFlow model, runs some example inputs through it, and
checks that it produces the expected outputs.

To understand how TensorFlow Lite does this, you can look at the source in
[micro_speech_test.cc](micro_speech_test.cc).
It's a fairly small amount of code that creates an interpreter, gets a handle to
a model that's been compiled into the program, and then invokes the interpreter
with the model and sample inputs.
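
In outline, that test code does something like the following simplified sketch.
It is illustrative rather than exact: `g_model`, the op list, and the arena size
follow this example's sources, but check
[micro_speech_test.cc](micro_speech_test.cc) for the real version:

```cpp
// Simplified sketch of the interpreter setup in micro_speech_test.cc.
#include "tensorflow/lite/micro/examples/micro_speech/micro_features/model.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

void RunOneInference() {
  static tflite::MicroErrorReporter micro_error_reporter;

  // Map the model that was compiled into the binary.
  const tflite::Model* model = tflite::GetModel(g_model);

  // Register only the operations the model needs.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddDepthwiseConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddReshape();

  // Working memory for the tensors.
  constexpr int kTensorArenaSize = 10 * 1024;
  static uint8_t tensor_arena[kTensorArenaSize];

  tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                       kTensorArenaSize, &micro_error_reporter);
  interpreter.AllocateTensors();

  // Copy a one-second spectrogram slice into the input tensor, then invoke.
  TfLiteTensor* input = interpreter.input(0);
  // ... fill input->data.int8 with feature data here ...
  interpreter.Invoke();

  // The output holds one score per category, e.g. silence/unknown/"yes"/"no".
  TfLiteTensor* output = interpreter.output(0);
  (void)output;
}
```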

## Train your own model

So far you have used an existing trained model to run inference on
microcontrollers. If you wish to train your own model, follow the instructions
given in the [train/](train/) directory.