Business analysts understand the business needs and pass the requirements to the data analysts.
Data analysts understand the data requirements and determine what data is needed to address a business problem. The data analyst then collects the right data from an internal source, by curating it (e.g., scraping), or from an external source (e.g., open datasets, Kaggle, research projects). The raw data needs to be processed before the machine learning team can use it. This is a critical step, hence 'become one with data'.
At a high level, the data is usually divided into two categories:
1. Structured Data…
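To make the distinction concrete, here is a toy illustration (the records below are invented for the example):

```python
# Structured data: a fixed schema, e.g. rows with named columns.
structured = [
    {"customer_id": 101, "age": 34, "churned": False},
    {"customer_id": 102, "age": 29, "churned": True},
]

# Unstructured data: free-form content with no predefined schema.
unstructured = "Customer called to complain that the app crashes on login."

# Structured rows can be queried directly by field name.
churned_ids = [row["customer_id"] for row in structured if row["churned"]]
print(churned_ids)  # [102]
```

Structured data lends itself to direct querying and aggregation; unstructured data usually needs extra processing (parsing, NLP, feature extraction) before a model can use it.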
With GDPR and CCPA now enforced, cloud companies need to handle data in compliance with fair-usage requirements. This is a brief framework to get started…
Data science developers and companies that fall under the ambit of these regulations are the potential consumers of this framework.
Risk = Asset (individual’s data) + Vulnerability + Threat
Therefore, we need risk management, assessment and treatment with organisation-defined 'controls'.
Risk Management : The coordinated activities to direct and control an organisation with regards to risk.
Risk Assessment : The overall process of risk identification, risk analysis and risk evaluation.
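Purely to illustrate the Risk = Asset + Vulnerability + Threat idea above, here is a toy scoring sketch (the 1–5 rating scale and the function are invented for this example, not part of any standard methodology):

```python
def risk_score(asset_value, vulnerability, threat):
    """Toy risk score: Risk = Asset + Vulnerability + Threat.

    Each factor is rated on an assumed 1-5 scale; higher means riskier.
    """
    for factor in (asset_value, vulnerability, threat):
        if not 1 <= factor <= 5:
            raise ValueError("each factor is rated on a 1-5 scale")
    return asset_value + vulnerability + threat

# A high-value personal-data asset with a known vulnerability
# and an active threat scores near the top of the scale.
print(risk_score(5, 4, 4))  # 13 out of a maximum 15
```

In a real risk assessment, the scoring scheme, scales, and thresholds would come from the organisation's own defined controls.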
What happens under the hood? Let's go…
Let me start by saying that Python is a dynamically typed language and everything in Python is an object. What we get in return are references to such objects.
With an example:
x = 10
print("x type is :", type(x))
x type is : <class 'int'>
x is a reference variable to the integer object 10. Another reference variable, y, is being created, which again points to the integer object 10.
y = x
Check their memory addresses using the id() function:
print("Integer object 10 referenced by variable x:", hex(id(x)))
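Putting the pieces together, here is a minimal runnable version of the snippet above:

```python
x = 10  # x references the integer object 10
y = x   # y references the same object; no copy is made

print("x type is :", type(x))  # <class 'int'>
print("Integer object 10 referenced by variable x:", hex(id(x)))
print("Integer object 10 referenced by variable y:", hex(id(y)))

# Both names point at the same object, so their identities match.
print(x is y)  # True
```

Since assignment only copies the reference, `id(x)` and `id(y)` print the same address, and `x is y` evaluates to `True`.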
This article was published during the COVID-19 outbreak.
Do you create tons of invoices/bills/statements for your business? Do you need to capture data from your IoT (Internet of Things) sensors into a spreadsheet? Do you want to automate data capture and collaborate with teams? Do you want to scrape Wikipedia or similar web content into a document? Etc…
If such thoughts come to mind for your project or business, it's time to automate and focus on your business's core competency. Check out the Google G Suite developer APIs. …
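As a small illustration of the spreadsheet-capture idea, the Google Sheets API appends rows using a simple JSON body. This is only a sketch: the spreadsheet ID and range in the comment are hypothetical placeholders, and a real call needs OAuth credentials plus the google-api-python-client library.

```python
def build_append_body(rows):
    """Wrap rows of cell values in the JSON body the Sheets API expects."""
    return {"values": rows}

# One invoice row to append; the values are invented for the example.
body = build_append_body([["2020-04-01", "Invoice #123", 450.00]])
print(body)  # {'values': [['2020-04-01', 'Invoice #123', 450.0]]}

# With an authenticated service object (not shown), this body would be
# passed to spreadsheets().values().append(), e.g.:
#   service.spreadsheets().values().append(
#       spreadsheetId="YOUR_SHEET_ID", range="Sheet1!A1",
#       valueInputOption="RAW", body=body).execute()
```

The same pattern, a small JSON request body sent through a client library, applies across the other G Suite APIs (Docs, Drive, Gmail).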
Let us take it forward from where we left off in Part 2. We converted the FaceNet checkpoint to a frozen FaceNet model (.pb) containing just the inference branch, stripping phase_train out of the model. To verify this, use the TensorFlow graph_transforms tool, as shown in the image below:
This tutorial is about setting up a local TensorFlow and OpenCV standalone build for a C++ implementation. It was tested with the following versions, but should work for all versions unless there are drastic changes in the libraries. Please let me know your findings if that happens.
The GitHub repo for the TensorFlow Lite standalone C++ build for Linux and macOS is here.
sudo apt-get -y update && sudo apt-get -y upgrade
Next, install some base dependencies and tools we’ll need.
There are a few options given for the bazel build…
The comprehension in this article comes from the FaceNet and GoogLeNet papers. This is a two-part series: the first part covers the FaceNet architecture along with an example running on Google Colab, and the later part will cover the mobile version.
FaceNet is a state-of-the-art face recognition, verification and clustering neural network. It is a 22-layer deep neural network that directly trains its output to be a 128-dimensional embedding. The loss function used at the last layer is called triplet loss.
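To sketch the idea behind triplet loss (a minimal plain-Python illustration, not the paper's implementation): the loss pulls an anchor embedding toward a positive example (same identity) and pushes it away from a negative example (different identity) by at least a margin.

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    """L = max(||a - p||^2 - ||a - n||^2 + margin, 0).

    Embeddings are given as plain lists of floats.
    """
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    return max(d_pos - d_neg + margin, 0.0)

# Anchor close to the positive, far from the negative -> zero loss.
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [1.0, 1.0]))  # 0.0
# Anchor closer to the negative -> positive loss, pushing training
# to separate the embeddings.
print(triplet_loss([0.0, 0.0], [1.0, 1.0], [0.1, 0.0]))
```

Training minimizes this loss over many (anchor, positive, negative) triplets, which is what shapes the 128-dimensional embedding space so that same-identity faces cluster together.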
When state-of-the-art accuracy is required for face recognition/authentication, FaceNet is an obvious choice on both the Android and iOS platforms. But running FaceNet on mobile devices needs some special treatment; this article describes the problem and a potential solution.
On Android, every application has a memory limit enforced by the Dalvik VM. The Android NDK is not restricted by this limit, but that does not mean native code can consume as much memory as it wants…
This article is also published on LearnOpenCV
My OpenCV Android SDK = small size library
If you choose OpenCV for production, your primary goal is to bring down the size of the library while keeping it performance-packed. OpenCV is an awesome library with tons of algorithms, but your application most likely uses only a small subset of them, so it makes perfect sense to include what is required and leave out the rest.
The library can be compiled statically along with your application code or linked dynamically at runtime, and this is completely application…
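As a sketch of the "include what is required" approach (not a tested production configuration; the module list below is illustrative, so substitute the modules your app actually uses), OpenCV's CMake build can compile only selected modules as static libraries:

```shell
# Build only the listed modules (core, imgproc are illustrative),
# statically, and skip the apps and test binaries to shrink the output.
cmake -D BUILD_SHARED_LIBS=OFF \
      -D BUILD_LIST=core,imgproc \
      -D BUILD_opencv_apps=OFF \
      -D BUILD_TESTS=OFF \
      -D BUILD_PERF_TESTS=OFF \
      ../opencv
```

Trimming the module list this way is usually the single biggest lever on final library size.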
Fuel for Neural Network, Make GIF, do what you like to …
You need a video that you want to break up into images. Later, these images can be used for various purposes.
3. Go to ‘video’ → ‘filters’ → ‘scene filter’
4. You will see something like the following:
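Whichever tool you use to export frames, the arithmetic for deciding which frames to keep is the same. Here is a small sketch (the fps and interval values are assumptions for the example):

```python
def frames_to_save(total_frames, fps, every_seconds):
    """Return the frame indices to export: one frame every `every_seconds`."""
    step = int(fps * every_seconds)
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, saving one image every 2 seconds.
print(frames_to_save(total_frames=300, fps=30, every_seconds=2))
# [0, 60, 120, 180, 240]
```

These indices map directly to the frame numbers you would step to in the video tool when exporting images.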