Using the new Android NDK toolchain to build an arbitrary project.

In my previous post, I talked about building an architecture-specific toolchain. But Google has deprecated the standalone toolchain, so building a native toolchain that way is no longer possible. It is quite inevitable that you can't stick to r14b (or any version earlier than r19c) in case you need newer NDK features (for example the Camera2 APIs: camera zoom ratio, etc.). For that you have to start using the latest NDK versions, i.e. at least r21e and above.

I have tested the following combinations, but not all NDK versions are compatible with OpenCV:

Business analysts understand the business needs and pass the requirements to the data analyst.

Data analysts understand the data requirements and determine what data is needed to address a business problem. The data analyst then collects the right data from an internal source, by curating the data (e.g., scraping), or from an external source (e.g., open datasets, Kaggle, research projects). The raw data needs to be processed before the machine learning team can use it. This is a critical step, hence the advice to 'become one with the data'.
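As a minimal sketch of this processing step (the record fields and cleaning rules here are entirely hypothetical, chosen only to illustrate the idea), raw data might be deduplicated and its gaps filled before handing it to the machine learning team:

```python
# Hypothetical raw records, e.g. scraped or pulled from an internal source.
raw = [
    {"user_id": 1, "age": 34},
    {"user_id": 1, "age": 34},    # exact duplicate
    {"user_id": 2, "age": None},  # missing value
    {"user_id": None, "age": 41}, # missing key
]

def clean(records):
    """Drop duplicates and rows missing the key; impute missing ages
    with the mean of the observed ages."""
    ages = [r["age"] for r in records if r["age"] is not None]
    default_age = sum(ages) / len(ages)
    seen, out = set(), []
    for r in records:
        key = (r["user_id"], r["age"])
        if r["user_id"] is None or key in seen:
            continue
        seen.add(key)
        out.append({**r, "age": r["age"] if r["age"] is not None else default_age})
    return out

print(clean(raw))
```

Real pipelines do far more (validation, normalisation, labeling), but the shape is the same: raw in, consistent records out.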

Usually the data is divided into two categories (at high level):

1. Structured Data

With GDPR and CCPA in force, cloud companies need to handle data in compliance with fair-usage requirements. This is a brief framework to get started …

Data science developers and companies that fall under the ambit of these compliance regimes are the potential consumers of this framework.

Risk = Asset (individual’s data) + Vulnerability + Threat

Therefore, we need risk management, assessment, and treatment using organisation-defined 'controls'.
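To make the formula above concrete, here is a toy illustration that simply adds the three factors as the formula states. The 1–5 rating scale is entirely hypothetical and not part of any standard; real assessments use organisation-defined scales and controls:

```python
def risk_score(asset_value, vulnerability, threat):
    """Toy risk score per Risk = Asset + Vulnerability + Threat.
    Each input is a hypothetical rating from 1 (low) to 5 (high)."""
    return asset_value + vulnerability + threat

# An individual's data (high-value asset) with a known vulnerability
# and a moderately active threat:
print(risk_score(asset_value=5, vulnerability=4, threat=3))
```

A higher score would push the item up the treatment queue; the point is only that all three factors contribute, and lowering any one of them (e.g. patching the vulnerability) reduces the risk.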

Formal definitions:

Risk Management : The coordinated activities to direct and control an organisation with regards to risk.

Risk Assessment : The overall process of risk identification, risk analysis and risk evaluation.

Risk Treatment

What happens under the hood? Let's go …

Let me start by saying that Python is a dynamically typed language and everything in Python is an object. What we get in return are references to such objects.

With an example:

x = 10
print("x type is : ", type(x))
x type is : <class 'int'>

Here, x is a reference variable to the integer object 10.

Another reference variable, y, is created, which also points to the integer object 10.

y = x

Check their memory addresses using the built-in function id().

print("Integer object 10 referenced by variable x", hex(id(x)))
print("Integer object 10 referenced by variable y", hex(id(y)))
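Putting the pieces together in one self-contained sketch (the exact hex addresses will differ on every run, so they are not shown):

```python
x = 10
y = x  # y now references the same int object as x

# Both names point at one object, so their ids match.
print(hex(id(x)), hex(id(y)))
print(x is y)  # True

# Rebinding y makes it reference a different object; x is untouched.
y = 20
print(x, y)    # 10 20
print(x is y)  # False
```

Assignment in Python never copies an object; it only binds a name to one. Only rebinding (or mutating a mutable object) changes what you observe afterwards.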

This article was published during the COVID-19 outbreak.

Do you create tons of invoices/bills/statements for your business? Do you need to capture your IoT (Internet of Things) sensor data into a spreadsheet? Do you want to automate data capture and collaborate with teams? Do you want to scrape Wikipedia or similar web content into a document? etc …

If such thoughts come to mind for your project or business, it's time to automate and focus on your business's core competency. Check out the Google G Suite developer APIs. …
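As a hedged sketch of one such automation (the sensor fields here are hypothetical), IoT readings can be shaped into the row-major `values` body that the Google Sheets API's `spreadsheets.values.append` method expects:

```python
def to_sheets_payload(readings):
    """Convert sensor readings (dicts) into the row-major 'values'
    body used by the Sheets API append call."""
    rows = [[r["timestamp"], r["sensor_id"], r["value"]] for r in readings]
    return {"values": rows}

readings = [
    {"timestamp": "2020-05-01T10:00Z", "sensor_id": "temp-1", "value": 21.5},
    {"timestamp": "2020-05-01T10:05Z", "sensor_id": "temp-1", "value": 21.7},
]
payload = to_sheets_payload(readings)
print(payload["values"][0])
```

With the google-api-python-client library, this payload would be passed as `body=` to `service.spreadsheets().values().append(...)` along with your spreadsheet ID, range, and `valueInputOption`.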

Understanding it a bit more …

If you have not read my earlier stories about the FaceNet architecture and converting .pb to .tflite, I would suggest going through part 1 and part 2.

Let us take it forward from where we left off in part 2. We converted the FaceNet checkpoint to a frozen FaceNet model (.pb) containing just the inference branch, with phase_train stripped out of the model. To verify this, please use the TensorFlow graph_transforms tool, as shown in the image below:

This tutorial is about setting up your local TensorFlow and OpenCV standalone build for a C++ implementation. This tutorial was tested with the following versions, but it should work for all versions unless there are drastic changes in the libraries. Please let me know your findings if that happens.

The GitHub repo for the TensorFlow Lite standalone C++ build for Linux and macOS is here.

1. Install Dependencies

sudo apt-get -y update && sudo apt-get -y upgrade

Next, install some base dependencies and tools we’ll need.

For Bazel:

There are a few options given on the Bazel build page.

Part 1: Architecture and running a basic example on Google Colab

The material in this article comes from the FaceNet and GoogLeNet papers. This is a two-part series: in the first part we will cover the FaceNet architecture along with an example running on Google Colab, and the later part will cover the mobile version.

FaceNet is a state-of-the-art face recognition, verification, and clustering neural network. It is a 22-layer deep neural network that directly trains its output to be a 128-dimensional embedding. The loss function used at the last layer is called triplet loss.
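The triplet loss pulls an anchor embedding closer to a positive (same identity) than to a negative (different identity) by at least a margin α: L = max(‖a − p‖² − ‖a − n‖² + α, 0). A minimal NumPy sketch with toy 4-dimensional vectors (FaceNet uses 128-dimensional, L2-normalised embeddings; the margin value here is just illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """L = max(||a - p||^2 - ||a - n||^2 + alpha, 0)"""
    pos_dist = np.sum((anchor - positive) ** 2)
    neg_dist = np.sum((anchor - negative) ** 2)
    return max(pos_dist - neg_dist + alpha, 0.0)

# Toy "embeddings": the positive is close to the anchor, the negative is far.
a = np.array([1.0, 0.0, 0.0, 0.0])
p = np.array([0.9, 0.1, 0.0, 0.0])
n = np.array([0.0, 1.0, 0.0, 0.0])
print(triplet_loss(a, p, n))
```

When the anchor is already closer to the positive than to the negative by more than the margin, the loss is zero; swapping the positive and negative roles yields a large loss, which is what drives training.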

Converting Facenet (.pb) to Facenet (.tflite)

If you have not read my story about the FaceNet architecture, I would recommend going through part 1. In the next part, part 3, I will compare the .pb and .tflite models.

When state-of-the-art accuracy is required for face recognition/authentication, FaceNet is the obvious choice for both Android and iOS platforms. But running FaceNet on mobile devices needs some special treatment; this article addresses the problem and a potential solution.

On Android, every application has a memory limit enforced by the Dalvik VM. The Android NDK is not bound by this limit, but that does not mean native code can consume as much memory as it wants…

Subset of OpenCV on Android

This article is also published on LearnOpenCV

My OpenCV Android SDK = small size library

If you choose OpenCV for production, your primary goal is to bring down the size of the library while keeping it performance-packed. OpenCV is an awesome library with tons of algorithms, but you are probably using only a very small subset of them in your application, so it makes perfect sense to include what is required and leave out the rest.

The library can be compiled statically along with your application code or dynamically linked at runtime, and this is completely application…

Milind Deore

Co-founder, Logits Systems. Biometrics on the edge.
