
Google Duplex AI to replace Human Interaction


Overview

Google-Duplex-Image1

During the Google I/O 2018 event, Google announced a glimpse of a new technology known as Google Duplex.
Duplex is a new Google Assistant feature that can carry out specific tasks for you over the phone. Duplex will be able to reserve a table at your favorite restaurant, schedule an appointment, or call an organization to check its opening hours.
Launch Google Assistant on your mobile and say, "Book a table for two at Domino's Pizza at 8 pm today," and Duplex will do the rest. The AI feature will call the Domino's Pizza outlet and make a reservation for you. After the call, you'll get a notification on your phone confirming the reservation.
This is the power of AI, and it is making our lives a lot easier.

Can we build it with Dialogflow?

Yes. Dialogflow has released a feature called the "Dialogflow Phone Gateway". In its beta release, it provides a telephony interface for your agent.
Currently, you can use only selected phone numbers hosted by Google, but in the near future any public number will be supported.
It works in US English for now.

Let’s get started

  1. Go to the Dialogflow Console
  2. Create a new agent
  3. Delete the default intents you get
  4. Click the settings icon next to the agent name
  5. Enable Beta features to work with the telephony features
  6. Hit the save button

Now let's set up new intents.

Click on Settings and, from the Export and Import tab, import this agent:

Google-Duplex-Image2

Once you are done with the import, save your agent again. In the sidebar you will find the Integrations tab.

Google-Duplex-Image3

Click on the Integrations tab, and there you will find Dialogflow Phone Gateway.

Google-Duplex-Image4

To configure this, click on it and select the language; English is the only one available for now. The country is US by default, and other countries will be added soon.

Google-Duplex-Image5

By clicking NEXT you will get the choice to select one number to use for the whole conversation.

Google-Duplex-Image6

That's it. Click on Create and you are done with the configuration part.

Let’s test by Calling the number

Enable Small Talk from the sidebar.
You can now call the number and follow the simple voice prompts. The interactions are defined in your agent.
You will be asked to say something, after which you can "terminate" the call or "transfer" it.

Google-Duplex-Image7

You can also test this agent by calling +1 650-485-1222.

What’s the next step?

This was a simple test example, but what about the complex one shown in Google's announcement?
To build something like that, here are the developer notes.

This feature supports rich responses such as:

Play audio: It will play the given audio in the response.

Synthesize speech: You can give a response like,

<speak>Please begin by saying <break time="0.5s" /> test.</speak>

and it will be converted to audio for you.

Transfer call: Transfers the caller somewhere else; think of it as handing the call over to a human. A sketch of how these responses look in a webhook payload follows below.
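To get a feel for how these rich responses can be returned from a fulfillment webhook, here is a rough sketch of a webhook response using the telephony-specific message types. The field names follow the Dialogflow v2beta1 API as documented at the time of writing, and the bucket path and phone number are placeholders; treat this as an illustrative sketch rather than a copy-paste answer.

{
  "fulfillmentMessages": [
    {
      "platform": "TELEPHONY",
      "telephonySynthesizeSpeech": {
        "ssml": "<speak>Please begin by saying <break time=\"0.5s\" /> test.</speak>"
      }
    },
    {
      "platform": "TELEPHONY",
      "telephonyPlayAudio": {
        "audioUri": "gs://your-bucket/greeting.wav"
      }
    },
    {
      "platform": "TELEPHONY",
      "telephonyTransferCall": {
        "phoneNumber": "+14155550100"
      }
    }
  ]
}

Each message in the array maps to one of the three response types described above, and Dialogflow plays them to the caller in order.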

The speech and voice settings can be changed in your agent's settings under the Speech tab.

Voice: You can choose the voice generation model; pick whichever suits you.

Speaking Rate: The agent's speaking rate can be changed.

Pitch: Pitch of the voice can be adjusted as per need.

Volume Gain: Audio volume gain can be adjusted.

Audio Effects Profile: Select the audio effects profiles that you want to apply to the synthesized voice. Read more here: Audio Profiles.

Conclusion : –

A technology revolution has started with this. To stand out in the market, adopt these kinds of features in your business and engage with your customers.
This was a really easy and simple example that can give you simple answers. Let's see how it goes; I'll see you all soon in the very next blog with a complex example and more human-like conversations.


WPA3 is coming soon!


Overview

Hello guys! Today we're going to talk about something that I think everyone can get excited about, because it is literally something everyone uses: WiFi! More specifically, there is a new WiFi security protocol called WPA3 coming out to supersede WPA2, which most of us use now. You might be thinking, "Oh! That's boring! How could that be exciting..!" But it actually is very interesting and has some big improvements over what we use now. There are some issues with WPA2 which we will talk about, and then we can discuss why WPA3 is very relevant and more convenient for you.

wpa-image1

What is WPA?

So, first of all, if you're not familiar with WPA and what all that means, WPA (WiFi Protected Access) is simply a security protocol for WiFi. It exists to secure the connection between your computer and your WiFi router. The idea is that you don't want people spying on what you're doing, so you have a passcode, and using that passcode the connection is encrypted.

wpa-image2

At first, in the 1990s, we had WEP (Wired Equivalent Privacy), which used a 64-bit or 128-bit key size as per the requirement. The main issue with WEP was that the first bytes of the output keystream were strongly non-random. So if you just gathered enough packets, you could easily discover the entire WEP key! And once you have the key, you can see all of the data going across the network, and of course if you want to send data onto the network, you're now able to do that as well!

wpa-image3

Then in 2003, we got the successor of WEP, which is WPA (WiFi Protected Access). This was a middleman, something that was implemented very quickly and was not completely standardized at the time. But it was a way of encrypting data on the same hardware we were using with WEP. In WPA, every packet gets a unique encryption key, so even if you somehow obtain one key, you will only be able to decrypt that particular packet.

But it was just a short-term workaround. We needed something that was standardized, and that's where WPA2 came in. WPA2 uses a cipher called AES, the Advanced Encryption Standard. Unfortunately, this cipher requires a lot more CPU cycles, so in many cases we had to upgrade from our old hardware to a brand new Access Point (AP). It also uses Counter Mode Cipher Block Chaining Message Authentication Code Protocol, which we happily call CCMP, a much more secure protocol for authenticating and making sure that the data within a packet really is what the sender put there.

Hole 196 in WPA2:

"Hole 196" is the name of a WPA2 vulnerability. The vulnerability is, in fact, buried on the last line of page 196 of the 1232-page IEEE 802.11 standard, which is why it was named "Hole 196."

The "Hole 196" vulnerability could lead to a potentially fatal insider attack, where an authorized person can bypass the WPA2 private key encryption and authentication to scan other authorized devices for vulnerabilities, install malware on them, and steal personal or confidential data from those devices.

What’s New in WPA3?

I'm sure at this point you're wondering what the actual differences are with WPA3. There are 4 main improvements that the WiFi Alliance has announced. Although we won't be able to use it right away, it is a huge step for wireless security and great news for laptop and smartphone users everywhere.

1. Brute Force Protection:

wpa-image4

Even if someone has a weak password, WPA3 is going to prevent brute force attacks by limiting how often you can guess the password. So even if a hacker tries to brute-force it or uses a dictionary attack, it's going to be so slow that it probably wouldn't even be worth it. But of course, you still want to use a relatively strong password, because you can never be too secure.

2. Individualized Encryption:

So, what this means is that even if you're connecting without a password, for example on a public WiFi hotspot, your connection will still be encrypted! That solves so many problems, because in the past we've talked about how, if you're at a hotel or at Starbucks using the WiFi, you would have to use something like a VPN to tunnel and encrypt all your traffic before using the hotspot. Because obviously, again, you do not want anyone listening in on that.

wpa-image5

Another bad thing: even if you do use a secure password, everyone else is using that same password! So if one person knows the password, they can decrypt everyone else's connections, which means you're not really any more protected.

So presumably the Individualized Encryption means that every single person is going to have their own encryption key so you’re going to be secure no matter what.

3. Stronger Encryption:

It will use a 192-bit encryption key, which is a lot stronger than the current 128-bit key (which, to be fair, has still not been cracked). But I guess they're aiming for future-proofing!

4. Easier WiFi device connection:

It makes it a lot easier for devices with very small screens, or no screen at all, to connect to your WiFi hotspot. Imagine a smart device at home that you want to connect to your WiFi network; it might not have a screen, and it certainly doesn't have a keyboard for you to type the password into! Right now, with WPA2, maybe that device creates its own WiFi hotspot, then you use your phone to connect to that smart device's hotspot, type in the password, disconnect from it, and only then does the device connect to your WiFi network using the password you just typed in. It's a mess, right?! -_-

Well, with WPA3's new "WiFi Easy Connect" you just need to scan a code with your phone to connect the device!

wpa-image6

So obviously I would say that all of these new features are awesome, and there might be small ones they add that they haven't really talked about, but those are the main 4.

When Will We Be Able to Use WPA3?

You might be wondering when we are going to get to use this awesome new security. Well, the standard is actually out and finalized right now, but the first devices that use it might take a while; probably by the end of next year we'll start to get phones that actually use it. It will be backward compatible, though: if you're running a router that uses WPA3, it will still fall back to WPA2 security if that's all a phone supports, but it will use WPA3 with the devices that have it.

wpa-image7

Even with all of this, it's not like WPA2 is going to disappear anytime soon! First of all, it is obviously going to take a while for WPA3 to be implemented in new devices, and there are probably a lot of devices which will never be updated, so WPA2 will still have to be supported in the long run as well. So don't worry if you're not going to upgrade your devices; you'll still be able to use WPA2, and it's not like you're insecure. But it's still good to keep an eye out for any device that has WPA3, because it could be an awesome feature!

Facebook AR Studio


Overview

Thinking about recent, top trending topics in technology, two things come to my mind : Elon Musk’s Mars mission and Snapchat’s Dog Filter. 😀 😀

Snapchat brought its AR filters into action a few months back, and they certainly became too popular to describe. But they have a limitation from a developer's point of view: Snapchat doesn't provide an open platform for developers to contribute AR filters.

Facebook too has AR filter functionality in its app. Now it has opened the doors for developers to build their custom filters and share them publicly via the Facebook app. Facebook has a tool called AR Studio with which we can build interesting, interactive and life-like filters. Building simple effects using face-tracking or hand-tracking is a piece of cake.

Facebook provides a great and easy-to-follow series of video tutorials for AR Studio. Here is the link : https://developers.facebook.com/docs/ar-studio/tutorials
I’ll demonstrate one of the samples from this link.

Prerequisites

We'll need the following components to start our fun-filled journey:

Here is the initial screen of AR studio.

ar-studio-image1

Creating a new project will show the following screen. I have labelled the necessary nomenclature in the image itself.

ar-studio-image2

Practical

We’ll build a demo wherein a beard, eyebrows and a moustache will be placed on the face and it’ll be tracked with the face movements.

Step 1: Insert a face-tracker from the insert button in the toolbar. Now there'll be axes (x, y and z) on the face, tracking it as it moves.

ar-studio-image3

Step 2: Insert a facemesh from the insert button in toolbar that will cover the entire face.

ar-studio-image4

Step 3: Create a material and apply texture to it.

Select facemesh from Scene tab and add a material to it from inspector menu by clicking + sign.

ar-studio-image5

Now, it's time to apply a texture to the material. The texture is an image of a beard, moustache and eyebrows, as shown below. As a side note, such textures and 3D objects can be built by a 3D artist using tools like Maya and Blender. The studio supports model formats like .fbx, .obj and .dae.

ar-studio-image6

Select the material from the assets panel and add the texture to it from the inspector panel, as shown below.

ar-studio-image7

That’s it. You now have an AR effect ready to be previewed.

You can preview on a physical device using the Mirror icon in the toolbar at the top-right of the window.
The effect can be exported using the Export icon next to the Mirror icon.

To make your effects publicly available, you'll need to submit them to Facebook. They'll review them and process them further. There are steps and conditions that should be followed and fulfilled in this process. Have a look here : https://developers.facebook.com/docs/ar-studio/docs/submitting/

A lot can be done with AR Studio. Hand-tracking, plane-tracking, creating rain effects and detecting gestures are a few of the many capabilities of the studio.

Here is a snap showing few samples.

ar-studio-image8

Facebook AR at Yudiz

We, here at Yudiz, are focusing on building a fantasy world which can be entered through a portal, using Facebook AR studio.

Conclusion :-

There are over 2.23 billion monthly active users on Facebook. Facebook AR Studio is a great platform to reach out to billions of users and display our creativity through Facebook and AR. Apart from the statistical point of view, such augmented reality is here to stay for a long time, as it increases user interaction with the app, which ultimately increases the user base.

Manage Your React State with Redux


Overview

Nowadays one of the hottest libraries in front-end development is Redux. Lots of developers are confused about what it is and what its benefits are.
Redux is a library, not a full framework like AngularJS. Redux keeps the state of your application in a single immutable state tree which cannot be changed directly. Redux was created by Dan Abramov as a state management library. It enables hot reloading, logging, time travel, and record and replay of state changes.

Redux is a library

Redux and React are actually two different libraries which can be, and have been, used independently of each other. Redux can also be used with other libraries and frameworks such as Angular, Ember, jQuery, or vanilla JavaScript.

redux

Redux Architecture

The Redux architecture, and how everything is connected together, can be seen in the image below. When you're learning Redux, there are a few core concepts that you need to get used to: Reducers, Store, Dispatch / Action, and Subscribe.

redux-architecture

Install Redux

First, create a simple React app with create-react-app, then install redux:

npm install --save redux

React with Connect to Redux

To link your React app to Redux, you have to let your app know that the store exists.
The React bindings are provided by the "react-redux" library, and the first major part of this library is the Provider.

npm install --save react-redux

Provider

Provider serves one purpose: to "provide" the store to its child components. It creates store access for its children, and since you want your whole app to access the store, you put your App component within the Provider. The Provider "wraps" the entire application tree, and only components within the Provider can be connected.

redux-provider
Index.js

import React from 'react';
import ReactDOM from 'react-dom';
import { createStore } from 'redux';
import { Provider } from 'react-redux';
import App from './App';
import reducer from './Reducer'; // the reducer we define later in this post

// createStore needs a reducer that describes how the state changes.
const store = createStore(reducer);
ReactDOM.render(<Provider store={store}>
<App /></Provider>, document.getElementById('root'));

Store

The store is one big JavaScript object that has tons of key-value pairs that represent the current state of the application. Unlike the state object in React that is sprinkled across different components, you have only one store. The store provides the application state, and every time the state updates, the view rerenders.

However, you can never mutate or change the store. Instead, you create new versions of the store.

(previousState, action) => newState

Because of this, you can do time travel through all the states from the time the app was booted on your browser.
The store has three methods to communicate with the rest of the architecture. They are:

  • store.getState(): access the current state tree of your application.
  • store.dispatch(action): trigger a state change based on an action. More about actions below.
  • store.subscribe(listener): listen for any change in the state. It will be called every time an action is dispatched. A short example tying these three methods together is shown below.
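Here is a minimal sketch tying these three methods together, assuming the counter reducer defined later in this post:

import { createStore } from 'redux';
import reducer from './Reducer';

const store = createStore(reducer);

// Listen for every state change.
const unsubscribe = store.subscribe(() => {
    console.log('New state:', store.getState());
});

store.dispatch({ type: 'INCREMENT' });     // logs { counter: 1, results: [] }
store.dispatch({ type: 'ADD', value: 5 }); // logs { counter: 6, results: [] }

unsubscribe(); // stop listening when no longer needed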

Action/Dispatch Creators

Actions are plain JavaScript objects that send information from your application to the store. If you have a very simple counter with increment, decrement, add and subtract buttons, pressing the increment button will result in an action being triggered that looks like this:

{
  type: "INCREMENT",
  value: 1
}

They are the only source of information for the store. The state of the store changes only in response to an action. Each action should have a type property that describes what the action object intends to do. Other than that, the structure of the action is completely up to you. However, keep your actions small, because an action represents the minimum amount of information required to transform the application state.
For instance, in the example above, the type property is set to "INCREMENT" (or "DECREMENT"), and an additional value payload property is included. You could change the payload property to something more meaningful. Now dispatch to the store like this:

store.dispatch({type: "INCREMENT"});
store.dispatch({type: "DECREMENT"});
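In a real app you usually wrap these objects in small functions called action creators, so the action shape lives in one place. A minimal sketch (the function names here are illustrative, not part of the demo above):

// Action creators simply return the action objects shown above.
const incrementCount = () => ({ type: 'INCREMENT' });
const addToCount = (value) => ({ type: 'ADD', value });

store.dispatch(incrementCount());
store.dispatch(addToCount(5));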

Subscribe

You are going to listen for any changes in the store, and then log the current state of the store.

store.subscribe( () => {
    console.log("State has changed: " + JSON.stringify(store.getState()));
})

So how do you update the store? Redux has something called actions that make this happen.

Connect

You can connect your components via the Provider. We already established that there is no way to interact with the store directly: you either retrieve data by obtaining its current state, or change its state by dispatching an action.

redux-connect
This code uses connect to map the store's state and dispatch to the props of a component :

import React, { Component } from 'react';
import { connect } from 'react-redux'
// CounterOutput and CounterControl are simple presentational components defined elsewhere.

class Counter extends Component {

   render() {
       return (
           <div>
               <CounterOutput value={this.props.ctr} />
               <CounterControl label="Increment" clicked={this.props.onIncrementCounter} />
               <CounterControl label="Decrement" clicked={this.props.onDecrementCounter} />
               <CounterControl label="Add 5" clicked={this.props.onAddCounter} />
               <CounterControl label="Subtract 5" clicked={this.props.onSubtractCounter} />  
           </div>
       );
   }
}
const mapStateToProps = state => {
   return {
       ctr: state.counter,
      }
}
const mapDispatchToProps = dispatch => {
   return {
       onIncrementCounter: () => { dispatch({ type: 'INCREMENT' }) },
       onDecrementCounter: () => { dispatch({ type: 'DECREMENT' }) },
       onAddCounter: () => { dispatch({ type: 'ADD', value: 5 }) },
       onSubtractCounter: () => { dispatch({ type: 'SUBTRACT', value: 5 }) },

   }
}
export default connect(mapStateToProps, mapDispatchToProps, null, { pure: false })(Counter);

The function mapStateToProps receives the store's "state" and mapDispatchToProps receives "dispatch"; each returns an object whose keys are then passed on as props of the component they are connected to.
In this case, mapDispatchToProps returns an object with the keys "onIncrementCounter", "onDecrementCounter", "onAddCounter" and "onSubtractCounter", and mapStateToProps returns an object with the ctr key.
The connected component (which is exported) provides "onIncrementCounter", "onDecrementCounter", "onAddCounter", "onSubtractCounter" and ctr as props to Counter.
The third argument of connect() is mergeProps and the fourth one is options; both are optional. The fourth options object has a pure parameter, which is a Boolean; if it is true, connect() will avoid unnecessary re-rendering.
When you want to retrieve data, you don't get it directly from the store. Instead, you get a snapshot of the data in the store at any point in time using store.getState().

Reducer

An action describes the problem, and the reducer is responsible for solving the problem. In the earlier example, the incrementCount action creator returned an action that supplied information about the type of change you wanted to make to the state. The reducer uses this information to actually update the state. There is a point highlighted in the docs that you should always remember while using Redux:
The reducer should calculate the next state and return it. No surprises. No side effects. No API calls. No mutations. Just a calculation.
What this means is that a reducer should be a pure function: the same set of inputs should always return the same output. Beyond that, it shouldn't do anything more. A reducer is not the place for side effects such as making AJAX calls.
Let’s fill in the reducer for your counter.
Reducer.js

const initialState = {
   counter: 0,
   results: []
}

const reducer = (state = initialState, action) => {
   switch (action.type) {
       case 'INCREMENT':
           return {
               ...state, counter: state.counter + 1
           }

       case 'DECREMENT':
           return {
               ...state, counter: state.counter - 1
           }

       case 'ADD':
           return {
               ...state, counter: state.counter + action.value
           }

       case 'SUBTRACT':
           return {
               ...state, counter: state.counter - action.value
           }

       default:
           // For any unknown action, return the previous state unchanged.
           return state;
   }
}
export default reducer;

You don't mutate the state; you create a copy. Note that calling Object.assign(state, { counter: state.counter + action.value }) is wrong: it will mutate the first argument. You must supply an empty object as the first argument. You can also enable the object spread operator proposal and write { ...state, counter: state.counter + 1 } instead, as shown below. Finally, you return the previous state in the default case; it's important to return the previous state for any unknown action.
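To make the difference concrete, here is the wrong and the right way side by side, as they would appear inside a reducer case such as 'ADD':

// Wrong: mutates the existing state object.
return Object.assign(state, { counter: state.counter + action.value });

// Right: copies into a fresh object, leaving the old state untouched.
return Object.assign({}, state, { counter: state.counter + action.value });

// Equivalent, using the object spread syntax.
return { ...state, counter: state.counter + action.value };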

Summary

This post was meant to be a starting point for managing state in React with Redux. I have covered the basic Redux concepts such as the Provider, connect, store, dispatch / actions and reducers. Towards the end of the post, I also created a working Redux counter demo. Although it wasn't much, we learned how all the pieces of the puzzle fit together.

Fastlane : An automated app deployment tool – Part 1


Overview

After app development, the next step is to make the app publicly available through the Play Store or App Store. Publishing an app is a lengthy process which includes generating a signed APK, uploading it, uploading screenshots, filling out forms with the app description, and many more such tasks. Updating an already published app also includes most of these steps, which makes it a monotonous task.

Fastlane automates app uploading and performs most of these tasks by itself with just a few commands. Fastlane, which had already joined Twitter's Fabric, is now joining hands with Google's Firebase. That's great news!

Introduction

Before getting started with fastlane, we, as developers, need access rights to the client's Play Store account. For that, we need a Google Play JSON secret key.

The Google Play JSON secret key is used for managing the permissions and access for uploading and handling a specific app, whether through the regular process or an automated uploading process like fastlane.

Prerequisites

A Google Play Developer Console account is necessary for generating the Google Play JSON secret key, so first of all create a Google Play Developer account if you have not created one before.

Practical

Step 1: First of all open the Google Play Developer Console and select Settings tab.

fastlane-2

Step 2: Now click on API access in the settings list and you will find a button at the bottom named Create Service Account.

fastlane-3

Step 3: Now click on the Create Service Account button and you will see the following dialogue. Click on the Google API Console link and a new window will open.

fastlane-4

Step 4: Now click the Create Service Account button at the top of the developers console screen.

fastlane-5

Step 5: The Create Service Account screen will pop up and ask for the following details:

  • Enter the service account name.
  • Select the role.
  • Check the option Furnish a new private key.
  • Then select JSON as the key type.
  • Hit the Save button.

fastlane-6

Step 6: Now go back to the Google Play Developer Console and click on the Done button to close the dialogue.

fastlane-7

Step 7: Click on the Grant Access button from the Service Accounts List.

fastlane-8

Step 8: Now one pop-up window will appear on the screen for granting the permissions.

  • Do NOT edit the Email field.
  • Select Access Expiry Date as Never.
  • Select Administrator or Release Manager from the given role.
  • Select the app by clicking on the Add an App dropdown if you want to give access only to a particular app; otherwise leave it as it is.
  • Click on the Add User.

fastlane-9

That's it, you are done. You can now use the JSON secret file, which was downloaded when you created the service account, wherever you need it.

Fastlane : An automated app deployment tool – Part 2


Overview

Fastlane is an automation tool for the entire app store deployment process.

Fastlane has the following capabilities :

  • Make a repeatable custom workflow to build, upload and distribute new releases to the Play Store.
  • Upload and manage most of your application's metadata, including screenshots.
  • Automatically submit new versions of your application for review.

Introduction

Fastlane helps developers publish and upload an app along with all the necessary details with just a few commands. But before we get this advantage, we need to set it up for the app.

In this part of the series, I'll explain the core functionality of fastlane, starting right from its installation process.

How to install fastlane

Open the terminal and run the following command for installation.

sudo gem install fastlane -NV

Now navigate to the project directory in which you want to set up fastlane for that particular project.
fastlane-2-1

Setup the fastlane for the Project

Set up fastlane for a particular project by executing the following command from inside that project folder.

fastlane init

  1. Please enter the package name for your application when asked. (Ex. com.yudiz.example)
  2. Enter the path for the json secret file for that application access. For more information please click here.
  3. Press ‘N’ when asked if you plan on uploading info to Google Play Store via fastlane.

fastlane-2-2

Now you will be able to see a fastlane folder inside the project, configured on the basis of the information you provided.

You will find two files inside the configured fastlane folder.

  1. Appfile -> defines the global configuration information for your app.
  2. Fastfile -> defines the 'lanes' that drive the behavior of fastlane.

What is supply in fastlane?

Supply is a fastlane tool that transfers application metadata, screenshots and binaries to Google Play Developer Console.

Collect your Google Credentials from Google Play Developer Console

Now you have to download the .json credentials file from the Google Play Developer Console in order to upload the application using fastlane. Click here for the detailed document.

Make Configuration for Fastlane

Go into your project folder and open fastlane > Appfile. Now change the path for the json_key_file and save it. A complete example Appfile is sketched below.

json_key_file ("Enter your json file path here")
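For reference, a minimal Appfile for this setup usually contains just the key path and the package name. The values below are placeholders: the path points at wherever you stored the JSON secret, and the package name is the example one entered during fastlane init.

json_key_file("/Users/yudiz/Documents/google-play-secret.json") # path to your Google Play JSON secret
package_name("com.yudiz.example")                               # your app's package name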

Environment variable setup

Fastlane requires a few environment variables to be set up to run correctly. Specifically, not having your locale set to a UTF-8 locale will cause issues with building and uploading your build.

export LC_ALL=en_US.UTF-8

export LANG=en_US.UTF-8

Configure build.gradle for creating Release Apk

We have to configure the release keystore, alias and passwords in the build.gradle file so that fastlane can generate a signed release APK.

android {
   signingConfigs {
       config {
           keyAlias 'Add Your Key Alias'
           keyPassword 'Enter Key Password'
           storeFile file('/Users/yudiz/Documents/keystore.jks') //Change with your path
           storePassword 'Enter Password'
       }
   }

   defaultConfig {
       signingConfig signingConfigs.config
   }
}

Fetch your App Metadata from Google Play Console

If your application has already been created on the Google Play Developer Console, you're ready to start using supply to manage it! Run:

fastlane supply init

All of your Google Play Store application metadata is downloaded when you run fastlane supply init.
Because of limitations of the Google Play API, supply can't download screenshots or videos.

Deploy your application on Google Play Store

Now you have to add the fastlane plugin for incrementing the version code, and you have to create the lanes that supply the application to the Play Store.
Add the following code to your ./fastlane/Fastfile.

platform :android do
  before_all do
    # ENV["SLACK_URL"] = "https://hooks.slack.com/services/..."
    increment_version_code
  end

  desc "Submit a new debug Build"
  lane :debug do
    gradle(task: "assembleDebug")
  end

  desc "Submit a new Beta Build to Crashlytics Beta"
  lane :beta do
    gradle(task: "clean assembleRelease")
    supply(track: 'beta')
  end

  desc "Create a new version to the Google Play"
  lane :deploy do
    gradle(
      task: 'assemble',
      build_type: 'Release'
    )
    supply
  end

  # You can define as many lanes as you want

  after_all do |lane|
    # This block is called, only if the executed lane was successful

    # slack(
    #   message: "Successfully deployed new App Update."
    # )
  end

end

Now run the following command in the terminal to install the fastlane plugin that increments the version code of the APK. If you are asked to add it to the Gemfile, press Y. You have to add it only once per project.

fastlane add_plugin increment_version_code

After installing the plugin, run the fastlane deploy command to start the deployment process, or run the fastlane beta command if you want to deploy to the beta track.

fastlane-2-3

That's it, you are done.

Google Developer Console Registration


Overview

After Android app development, we need to publish the app on the Play Store to make it publicly available. The first step in the process is to get a Google Developer Console account. This blog explains the step-by-step process of purchasing a Google Developer Console account.

Introduction

To publish Android apps on Google Play, you'll have to create a Google Play Developer account using the following link :- Google Developer Console Registration Url.

Steps

  1. Sign up and accept the Developer Distribution Agreement of the Google Play Developer account.
  2. Pay the enrollment fee.
  3. Fill in your account details.

Example

You have to log in with your Google account.

google-dev-image1

Now you will be able to see the screen shown below.

google-dev-image2

Accept the developer agreement and click on the Continue to Payment button.

google-dev-image3

Note:- Please remember that your debit/credit card must have international transactions enabled and be registered for VBV (Verified by Visa).

google-dev-image4

After successful registration you will see the following screen, and you will also get an email from Google with the transaction details of your developer account in your Gmail inbox. Please don't delete it; star the message and forward it to an alternative mail account as a backup. In the future, when you have to transfer an application from one developer console account to another, you will need this Transaction ID for the app transfer.

google-dev-image5

That’s it. You’re Done.

VR with native android


Overview

When two of your favourite things come together, the result is nothing but pure fun! That is the case for me here: with Google's help, VR apps can be developed in native Android.

Google has provided a VR SDK for Android developers to take a dive into Virtual Reality. Integrating basic VR functionality with this SDK is easier than falling asleep! 😀

Here is a link for you to quickstart – https://developers.google.com/vr/develop/android/get-started

Let’s get ready for experiencing VR practically, but watch out! 😀

android-vr-image1

Practical

We'll concentrate on the very basics, i.e. loading an image and a video in VR.
Here’s what you need to get started:

  • A VR cardboard
  • An android device with OS 4.4+
  • A 360 image and a 360 video

You can use google’s app – Cardboard Camera (https://play.google.com/store/apps/details?id=com.google.vr.cyclops&hl=en_IN) to capture 360 images.

It’ll produce an image with .vr.jpg extension. But, the SDK supports .jpg. So, we’ll have to convert it and that can be done using this tool – https://storage.googleapis.com/cardboard-camera-converter/index.html

I have captured a view from my vicinity using this app and got it converted to .jpg. This is an image consisting of vertically stacked panoramas. We can also load a single-panorama image in VR.

android-vr-image2

I also have a demo 360 video.

Place these files in assets folder.

android-vr-image3

The views provided by the VR SDK to load 360 media are so lightweight that they can be used within a fragment. Here, for simplicity, we'll use them with activities. We'll have 2 activities, one for the 360 image and another for the 360 video.
First of all, include the libraries for image and video in the gradle file.

implementation 'com.google.vr:sdk-panowidget:1.10.0'
implementation 'com.google.vr:sdk-videowidget:1.10.0'

I have 2 buttons in my MainActivity.java which redirect me to ImageActivity.java (for 360 image) and VideoActivity.java (for 360 video).

Skipping the regular navigation code, let’s have a look at core part.

1) 360 image

Place VrPanoramaView in xml.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:gravity="center"
   tools:context=".ImageActivity">

   <com.google.vr.sdk.widgets.pano.VrPanoramaView
       android:id="@+id/vr_image"
       android:layout_width="match_parent"
       android:layout_height="300dp"
       android:layout_margin="5dp"
       android:scrollbars="none" />

</LinearLayout>

In the Java file, load the image from assets after converting it into a bitmap.

private void loadImage() {
   VrPanoramaView.Options viewOptions = new VrPanoramaView.Options();
   viewOptions.inputType = VrPanoramaView.Options.TYPE_STEREO_OVER_UNDER;

   try (InputStream istr = getAssets().open("image.jpg")) {
       // Decode the asset into a bitmap and hand it over to the panorama view.
       vrImageView.loadImageFromBitmap(BitmapFactory.decodeStream(istr), viewOptions);
   } catch (IOException e) {
       Log.e("ImageActivity", "Could not decode default bitmap: " + e);
   }
}

One thing to note here is the view options. I have used the STEREO_OVER_UNDER type. We have 2 options :

  • STEREO_OVER_UNDER – for an image which contains two equally sized panoramas stacked vertically (like the picture captured by me).
  • MONO – for an image containing a single panorama.

2) 360 video

Place VrVideoView in xml.

<com.google.vr.sdk.widgets.video.VrVideoView
   android:id="@+id/vr_video"
   android:layout_width="match_parent"
   android:layout_height="300dp"
   android:layout_margin="5dp"
   android:scrollbars="none" />

Now, just load the video from assets into the view.

private void loadVideo() {
   try {
       vrVideoView.loadVideoFromAsset("video.mp4",
               new VrVideoView.Options());
   } catch (IOException e) {
       e.printStackTrace();
   }
}

We also get an event listener for the video view. We can loop the video manually in code.

vrVideoView.setEventListener(new VrVideoEventListener() {
   @Override
   public void onCompletion() {
       vrVideoView.seekTo(0);          //loop
   }
});

We can even sync a seek bar with video view, depending on our requirements.
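As a rough sketch of that idea (assuming a SeekBar named videoSeekBar in the layout, which is not part of the demo above), the current playback position can be pushed to the seek bar on every rendered frame, and user seeks can be forwarded back to the view. In practice you would merge these overrides into the single event listener shown earlier.

vrVideoView.setEventListener(new VrVideoEventListener() {
    @Override
    public void onLoadSuccess() {
        // Use the video duration as the seek bar range once the video is ready.
        videoSeekBar.setMax((int) vrVideoView.getDuration());
    }

    @Override
    public void onNewFrame() {
        // Keep the seek bar in sync while the video plays.
        videoSeekBar.setProgress((int) vrVideoView.getCurrentPosition());
    }
});

videoSeekBar.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        if (fromUser) {
            vrVideoView.seekTo(progress); // jump the 360 video to the chosen position
        }
    }

    @Override
    public void onStartTrackingTouch(SeekBar seekBar) { }

    @Override
    public void onStopTrackingTouch(SeekBar seekBar) { }
});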

Both the views provide an option for Cardboard mode. Here is the screenshot.

android-vr-image4

That’s it…! Didn’t it finish before blinking an eye? 😀

Video

VR at Yudiz

Talking about native app development, we are concentrating on implementing more detailed concepts of VR and are trying to integrate it with AR technologies, like portals.

Conclusion

Innumerable apps can be imagined when 2 such vast concepts, VR and native Android, collaborate with each other. As this is the future of mobile application development, building such apps right from the start of the era will be highly beneficial.


Native vs React Native – When to use which one?


Overview

Before developing any mobile application, do you ever feel confused about which approach to use for creating your project application? Native or React Native?

Don’t worry then. In this talk I’m gonna discuss this topic in detail and I’ll try my best to sort out all your confusion correctly.

Terminologies

In mobile app development there are three kinds of apps :

  1. Native Apps
  2. Web Apps
  3. Hybrid Apps

Native applications are applications developed to run on a single platform or operating system like Android, iOS or Windows. Android Studio (which uses Java or Kotlin for app development) and Xcode (which uses Swift or Objective-C) are the popular native app IDEs for Android and iPhone respectively.

Web apps are applications that load web pages from a website hosted on a remote server. Laravel and Angular are some of the well-known frameworks used to develop web apps.

Hybrid apps are apps developed to run on multiple platforms (like Android and iOS) from a single codebase. React Native (a JavaScript framework introduced by Facebook to create mobile apps for Android and iOS) and Flutter (an SDK introduced by Google for the same purpose) are some of the well-known frameworks used to develop hybrid apps.

In this talk, we'll focus only on the concepts related to Native vs React Native.

Application aspects

  1. Single Code Base
  2. Development Cost and Time
  3. Hot Reload
  4. Maintenance
  5. Performance
  6. UI/UX
  7. Security
  8. Adaptability
  9. Development Community
  10. API Accessibility
  11. Native Modules Support
  12. Lack of libraries
  13. Interaction with other apps
  14. Dependency on Platform Providers
  15. Application Scope

Prepare a cup of tea for yourself and get comfortable, because this talk is going to be a tough debate between Native and React Native.
We'll discuss the Native vs React Native approaches based on some key factors which play an important role in any successful mobile application.

The graph below shows the dominating punches of React Native over the Native approach. We'll discuss all of them one by one and conclude the dominating approach after every key point.

native-image1

Now, Let’s get started…

1. Single Code Base :

Native apps don't support cross-platform compatibility. But with React Native we can build apps supported on both the Android and iPhone platforms at the same time, which removes the headache for developers of writing the code for a single application twice, once for each platform.

Dominating Approach : React Native

2. Development Cost and Time :

Native apps don't support code portability or cross-platform development, so an app needs to be developed separately for the Android and iOS platforms, which requires extra effort in terms of manpower as well as planning and resources. Organizations have to allocate separate developers and resources for the Android and iOS frameworks,
which ultimately results in greater development cost and time. The development time of React Native apps is roughly 30-35% less than that of native apps.

Dominating Approach : React Native

3. Hot Reload :

In native frameworks, while testing an app or changing a bit of source code, we have to build and run the whole project to check whether the changes are reflected in the app or not. This increases the development time of the application. React Native has a feature called Hot Reload, which doesn't require rebuilding the project every time; instead, it immediately deploys all the changes made in the code so they can be viewed on the device.

Dominating Approach : React Native

4. Maintenance :

Maintenance of a native app requires more attention, as bugs have to be solved on both platforms. In React Native, bugs need to be resolved in one place and you're done. However, if an issue is related to the native modules in React Native, then you have to solve it in both native modules, in which case maintenance for Native and React Native takes roughly equal time.

Dominating Approach : React Native

Based on the discussion above you may be inclined to pick the React Native approach, as there is no dominating key factor from the Native side so far.
But wait.. the master-strokes from the Native side start here, and can be seen in the graph below.

native-image2

Starting with the Performance,

5. Performance :

Performance is the key factor for measuring the efficiency and flexibility of an app created using the Native vs React Native frameworks. The performance of natively built apps is always better than apps developed with the React Native framework in terms of CPU, GPU and memory usage. As we know, JavaScript (used for React Native) is fast but not suitable for heavy calculations compared to Java, Kotlin or Swift. JavaScript has only a single dedicated device thread, so it is difficult to handle asynchronous tasks in React Native compared to native app development. If your app needs advanced features, complex manipulations and hardware integrations, then it is advisable to use the Native framework, because many modules and functionalities are not supported by React Native. Even Facebook doesn't use React Native for its whole app; they use a mixed approach of React Native and Native to build their apps.

Dominating Approach : Native

6. UI/UX :

Complex user interfaces, such as navigation patterns, custom views, app-specific components (like EditText in Android or TextField in iPhone), smooth transitions and animations, are difficult to design using React Native compared to native app development. Also, the UX of native frameworks is somewhat more flexible and responsive, as each screen is designed separately for Android and for iPhone; it is difficult to match the UX expectations of both platforms at the same time. In addition, the size of a React Native application is bigger than that of a native one.

Dominating Approach : Native

7. Security :

The native frameworks use Java or Kotlin for Android app development and Swift or Objective-C for iOS app development, while React Native uses JavaScript as the core development language. As we all know, JavaScript is not a strongly typed language and is an object-based language (it doesn't support all the features of classical OOP such as class-based polymorphism and inheritance), so it has more flaws and loopholes in its design patterns and syntax compared to strongly typed languages. Hence, when building apps with the React Native framework, it may be somewhat harder to detect simple errors and breakpoints.

Dominating Approach : Native

8. Adaptability :

As a beginner in mobile development, you will think about which one is easily adaptable Native Framework or React Native Framework?

But the answer is somewhat complicated. The React Native framework is easy to learn, as it uses JavaScript (plus some basic CSS and HTML skills), which is very handy to pick up. But keep in mind that this ease comes at a cost: since JavaScript is a weakly typed, interpreted language, it is more complicated to detect simple errors and fix them at an early stage. Java or Kotlin (used in the Android framework) and Swift or Objective-C (used in the iPhone framework) continue to be enhanced and aren't actually flawed in terms of efficiency and scalability.
Most React Native apps need to use native modules for specific functionality. So, to work with native modules, developers must also have skills in the native framework languages. Ultimately, React Native app developers also need to learn the native framework languages.

Dominating Approach : Native

9. Development Community :

As we all know, the native framework languages are mature and have a large development community on the internet. We can find answers to most of the bugs and questions we face during project development on various sites and forums. React Native is still growing and doesn't yet have a large enough developer community all over the world, so sometimes we have to figure out many complicated things ourselves for project-related functionality, which becomes a headache to manage.

Dominating Approach : Native

10. API Accessibility :

All of the APIs offered by the native platforms are accessible from the native frameworks; there is no extra bridge or connection layer needed to access them. On the other side, React Native can access most of the APIs provided by the platform, but not all of them. So, if there is a need to implement an API that is not accessible from React Native, it is hard to handle that situation, and to overcome it we have to add native module support. In short, native apps are fully device-integrated while React Native apps are only partially device-integrated.

Dominating Approach : Native

11. Native Modules Support :

When the APIs we need are not accessible from React Native, we have to add native module support. Native modules are modules written in the native languages which we incorporate into the React Native app as part of it. This is a good solution for adding support for inaccessible APIs in React Native, but it requires developers to have knowledge of the native languages, which is exactly what was being avoided by choosing React Native in the first place. It also requires coding the native modules for both platforms separately.

Dominating Approach : Native

12. Lack of Third Party Libraries :

As we all know, React Native is still growing, has a smaller development community and fewer third-party libraries. So, to use a third-party library developed for a native platform, you have to make the corresponding changes in native modules separately, and you have to find that native library for both platforms to add support.

Dominating Approach : Native

13. Interaction with Other Apps :

Native apps can easily connect and communicate with other native apps and pass data to them. They can also easily access the camera, contacts, settings and many other apps. In React Native, interaction with other apps is very poor: most of the time we need to use third-party libraries and extensions to communicate with other apps, and retrieving results from other apps is very difficult.

Dominating Approach : Native

14. Dependency on Platform Providers :

Every new update released for a native platform brings many evolutionary changes, and platform-specific updates can be adopted immediately in the native frameworks. React Native, however, is developed by Facebook, so the latest platform-specific updates or changes cannot be used directly in your React Native project: either you have to wait until Facebook releases a new update, or you have to create custom bridges in your native modules for your project requirements.

Dominating Approach : Native

15. Application Scope :

If you're planning to provide frequent updates and deploy user-friendly features in your app, then it is better to select the Native approach, because you can easily implement the required functionality and resolve bugs at any time. A React Native app has to deal with both platforms at the same time, so if a particular API is not accessible from React Native, or a third-party library you want in a future update or some other native issue comes up, it can become a total mess. But if you're not so concerned about future updates and bug fixing for your app, then you can pick React Native.

Dominating Approach : Native

So, these are the key factors to decide which app development approach to use based on our project requirements.

Yeah…sure!!! I’m too done with this unending debate.

native-image3

TL;DR

If your project development budget and time are very limited and you don't focus primarily on performance, then you can go for the React Native framework.

If your project definition is not too complex and doesn't require complex animations or hardware integrations, then you can go for the React Native framework.

If you’re a beginner in mobile app development then you must choose Native framework.

Also, if your developers are experts in the native framework languages, then you should choose the Native framework, because the project they deliver will have high performance, high scalability and a better user experience.
If your app needs advanced topics like AR/VR, third-party libraries, advanced APIs or the Internet of Things, you should definitely pick the Native framework.

In most other cases, I think Native framework is the best choice for project development to deliver advanced functionality with higher efficiency.

Conclusion

I believe both approaches (Native and React Native) have their own specific concerns. In some cases the Native framework is the best choice, and in some cases it is better to choose the React Native framework (or other hybrid frameworks). The approach you take for your project should be chosen based on key factors like development time, performance, scalability, application scope, etc.

Be productive and smart while choosing a project development approach, because the choice is yours.

All About Apple’s Latest Event “Gather around”


Overview

Every September Apple comes out with new hardware and software products and improves its current ones. Apple never misses a chance to surprise its fans, and this year Apple has come out with 3 brand new iPhones and the Series 4 watch, along with macOS Mojave, iOS 12 and watchOS 5. Since I am a big Apple fan I am very excited about the new iPhones and Watch; I can't wait to get my hands on them.

So, let’s find out what we got in 2018 from Apple 

Apple Event image6

Introducing the Brand New iPhones

1) iPhone XS

All new iPhone XS features and software

  • Made with stainless steel and a gold finish
  • 5.8 inch Super Retina display with IP68 water and dust resistance
  • 2436 x 1125 resolution, which is 2.7 million pixels at 458 pixels per inch. It's the highest-quality display Apple has ever released. It also supports high dynamic range (HDR) and comes with Dolby Vision support.
  • It has the A12 Bionic 64-bit chip and next-generation Neural Engine
  • Battery lasts up to 30 minutes longer than iPhone X
  • Comes with the world's most personal and secure mobile operating system, iOS 12
  • Very attractive colours: Gold, Silver, Space Gray
  • Available with storage of 64GB, 256GB and 512GB at $999, $1,149 and $1,349 respectively

You can pre-order your iPhone XS from Sept 14th, 2018.

Apple Event iPhone-Xs

2) iPhone Xs Max

Apple Event iPhoneXs-Max

All new iPhone XS Max features and software

  • Made with stainless steel and a gold finish
  • 6.5 inch Super Retina display with IP68 water and dust resistance
  • 2688 x 1242 resolution, which is 3.3 million pixels at 458 pixels per inch. It's the highest-quality display Apple has ever released. It also supports high dynamic range (HDR) and comes with Dolby Vision support.
  • It has the A12 Bionic 64-bit chip and next-generation Neural Engine
  • Battery lasts up to 90 minutes longer than iPhone X
  • Comes with the world's most personal and secure mobile operating system, iOS 12
  • Very attractive colours: Gold, Silver, Space Gray
  • Available with storage of 64GB, 256GB and 512GB at $1,099, $1,249 and $1,449 respectively

You can pre-order your iPhone XS Max from Sept 14th, 2018.

Both of these iPhones look stunning and have the most powerful hardware inside.

  • On the sound side, Apple has added stereo playback with a wider stereo field to improve audio.
  • Apple has boosted Face ID's performance so that it recognizes your face more quickly.
  • The A12 Bionic chip is the first in the smartphone market to be built on a 7nm process, which makes it smaller and more powerful than its predecessor.
  • Both iPhones have improved cameras with a 12-megapixel wide-angle camera and a 12-megapixel telephoto lens. On the front, there's a 7-megapixel sensor that can add more depth when you take selfies.
  • Apple also focuses on protecting the environment, so most of the hardware is designed to be recyclable to reduce environmental impact.
  • Apple's new iPhone XS and XS Max will be available for pre-order on Friday, September 14 and will hit store shelves on September 21.

3) iPhone XR

It doesn't end here: Apple also surprised its fans with a brand new iPhone that comes in cool and stunning colour variants, called the iPhone XR.

All new iPhone XR features and software

  • The most advanced LCD ever in a smartphone
  • 6.1 inch True Tone Liquid Retina HD display with IP67 water and dust resistance
  • 1792 x 828 resolution at 326 ppi
  • It has the A12 Bionic 64-bit chip and next-generation Neural Engine
  • Battery lasts up to 90 minutes longer than iPhone 8 Plus
  • Comes with the world's most personal and secure mobile operating system, iOS 12
  • Very attractive colours: Red, Yellow, White, Coral, Black, Blue
  • Available with storage of 64GB, 128GB and 256GB at $749, $799 and $899 respectively

You can pre-order your iPhone XR from Oct 19th, 2018.

Apple Event iPhone-Xr

4) Apple Watch Series 4

Apple became number #1 in the smartwatch era with the cellular-enabled Apple Watch Series 3, and it continues that with the Apple Watch Series 4, which has more advanced features that are not yet available on any other smartwatch.

Apple Event iWatch_serise-4

All new Apple Watch Series 4 features and software

  • Series 4 is available in stainless steel or aluminium
  • The display on the Series 4 (40mm) is more than 30 percent larger than the Series 3, with a near-edge-to-edge design
  • Series 4 also has a next-gen accelerometer and gyroscope with up to 2x the dynamic range, measuring up to 32 g-forces
  • The Digital Crown now gets haptic feedback
  • Series 4 is 50 percent louder, and the microphone has been moved away from the speaker to reduce echo
  • Built-in fall detection can alert emergency services and SOS contacts immediately
  • Additions include a low heart rate notification, alerts for atrial fibrillation, and an electrocardiogram (ECG)
  • Touted to be swim-proof, has Bluetooth v5.0, and has an optical heart sensor
  • New watch faces, including a new modular face, and new faces based on fire, water and vapour effects were also introduced
  • The Apple Watch Series 4 has been priced at $399 for the GPS-only, non-Cellular variant and $499 for the variant with both GPS and Cellular capabilities

The latest Series 4 will be available for pre-order starting September 14, 2018

Here is a comparison with full specifications of all three latest iPhones

Specification | iPhone XS | iPhone XS Max | iPhone XR
Display | 5.8-inch, 2436×1125, 458 ppi | 6.5-inch, 2688×1242, 458 ppi | 6.1-inch, 1792×828, 326 ppi
Contrast ratio | 1,000,000:1 | 1,000,000:1 | 1400:1
Processor | A12 Bionic 64-bit | A12 Bionic 64-bit | A12 Bionic 64-bit
Identification | Face ID | Face ID | Face ID
Rear camera – 1 | 12MP, ƒ/1.8 | 12MP, ƒ/1.8 | 12MP, ƒ/1.8
Rear camera – 2 | 12MP, ƒ/2.4 | 12MP, ƒ/2.4 | —
Video recording | 4K at 60fps, 1080p at 60fps | 4K at 60fps, 1080p at 60fps | 4K at 60fps, 1080p at 60fps
Front camera | 7MP photos, 1080p video | 7MP photos, 1080p video | 7MP photos, 1080p video
FaceTime | Over Wi-Fi or cellular | Over Wi-Fi or cellular | Over Wi-Fi or cellular
Assistant | Siri | Siri | Siri
Navigation | GPS, GLONASS, Galileo, and QZSS | GPS, GLONASS, Galileo, and QZSS | GPS, GLONASS, Galileo, and QZSS
Connectivity | Bluetooth 5.0, NFC | Bluetooth 5.0, NFC | Bluetooth 5.0, NFC
Talk time | Up to 20 hours | Up to 25 hours | Up to 25 hours
Internet use | Up to 12 hours | Up to 13 hours | Up to 15 hours
Video playback | Up to 14 hours | Up to 15 hours | Up to 16 hours
Audio playback | Up to 60 hours | Up to 65 hours | Up to 65 hours
Height | 5.65 inches (143.6 mm) | 6.20 inches (157.5 mm) | 5.94 inches (150.9 mm)
Width | 2.79 inches (70.9 mm) | 3.05 inches (77.4 mm) | 2.98 inches (75.7 mm)
Depth | 0.30 inch (7.7 mm) | 0.30 inch (7.7 mm) | 0.33 inch (8.3 mm)
Weight | 6.24 ounces (177 grams) | 7.34 ounces (208 grams) | 6.84 ounces (194 grams)
SIM card | Nano-SIM | Nano-SIM | Nano-SIM
Connector | Lightning | Lightning | Lightning
Colors | Gold, Silver, Space Gray | Silver, Gold, Space Gray | Red, Yellow, White, Coral, Black, Blue

Price and Storage

Storage | iPhone XS | iPhone XS Max | iPhone XR
64GB | USD $999, INR ₹99,900 | USD $1,099, INR ₹1,09,900 | USD $749, INR ₹76,900
128GB | — | — | USD $799, INR ₹81,900
256GB | USD $1,149, INR ₹1,14,900 | USD $1,249, INR ₹1,24,900 | USD $899, INR ₹91,900
512GB | USD $1,349, INR ₹1,34,900 | USD $1,449, INR ₹1,44,900 | —

Creating and understanding Siri Shortcut – Swift 4.2


Overview

In this blog, we will cover all the initial steps and theory you should know before starting to create any Siri shortcut.

Minimum Requirement:

    • Xcode 10 (beta or above)
    • iOS 12 (beta or above)

We will create a project named “DemoSiriShortcut”. In it, we will build “Siri shortcuts” for sending and requesting money. It will be simple: no API or real back end is involved, only UserDefaults. Just to understand Siri Shortcuts, we will add to or subtract from a value stored in UserDefaults.

Note: We are not integrating Siri itself; we are creating a shortcut for a process and accessing it through Siri by adding a keyword phrase to say. If you want to integrate Siri in the app, you can go through another blog.

What is Siri Shortcut?

A Siri shortcut is a shortcut for a process that we perform regularly in the app. It lets us trigger that process by saying a keyword phrase that we provide when adding it in Settings > Siri & Search or in the Shortcuts app. The process can be performed directly or with confirmation, in the background or by opening the app. Shortcuts also appear as suggestions in Search, Spotlight, and on the Apple Watch face. You can access these shortcuts through iPhone, Apple Watch, HomePod, and CarPlay too.

Available Siri shortcut domains

These are the types of activities (intent domains) a developer can use in the app:

  1. VoIP calling 📞
  2. Messaging 📫
  3. Notes and lists 📝
  4. Workouts 🏋🏻‍♂️
  5. Payments 💰
  6. Visual codes 👀
  7. Photos 🌃
  8. Ride booking 🚖
  9. Car commands 🚗
  10. CarPlay 🎵
  11. Restaurant reservations 🏨

All of the above are domains with built-in methods and handlers that can be used directly in the project with Siri integration. If you want something other than the above domains and their built-in methods, you can use “Custom Intents”, where you define all the parameters, actions and responses you want to use in the app, and that is what we are going to use in this project.

Creating a certificate to access SiriKit in the app

As we are using the Siri feature in the app, we need to create a certificate. It’s quite simple: we need a registered Apple Developer account, the app bundle ID and a CSR to create the certificate. In this certificate, we have to enable the Siri and App Groups capabilities of the app. You can create the certificate on the Apple Developer portal.

After creating the certificate, we need to add it to the project under Target > General > Signing (Debug) and Signing (Release) > Provisioning Profile > Import Profile, and select the downloaded profile for your ID.
Then build (⌘ + B) your project.

Now go to the Capabilities section, turn on Siri and App Groups, and add an app group ID of the form “group.(bundle identifier)” to the certificate.

Now you are ready to use Siri-shortcut in the project.

Siri Shortcut image2

SiriKit (Intents Extension)

Intents help you pass data from Siri to your app, even in the background, without opening the application, and let you perform the activity related to the specific domain that you have chosen. For payments, for example, it helps you send or request an amount from the contact you mentioned to Siri.
Payment has a protocol, INRequestPaymentIntentHandling, which helps you handle all the parameters you need to request a payment. It also helps you create the UI you need to show in Siri. All the common cases are covered by this protocol; still, if you want some custom process, you can use an “Intent Definition File”, as we are going to do.
For the Intent Extension file,

File > New > Target > iOS (tab) > Application Extension (section) > Intents Extension
Then, Next > (give the target a name) > Finish

Siri Shortcut image8

Note: Make sure the data you want to use in both Siri and the app has a single source of truth, such as one API or a shared UserDefaults, so that the two sides never see different values. The data must be persisted somewhere; a plain in-memory variable will not work, because every variable is initialized again whenever the app or extension launches. A sketch of such a shared helper follows.
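
Because the value lives in a shared UserDefaults, a helper like the PaymentDetails type used in the handle method later in this post could look roughly like this. This is a minimal sketch; the app group suite name and the key are assumptions for illustration:

import Foundation

// A minimal sketch of a shared balance store; suite name and key are illustrative assumptions.
struct PaymentDetails {
    static let store = UserDefaults(suiteName: "group.com.example.DemoSiriShortcut")!
    static let balanceKey = "balance"

    static func checkBalance() -> Int? {
        return store.object(forKey: balanceKey) as? Int
    }

    static func deposit(amount: Int) -> Int {
        let newBalance = (checkBalance() ?? 0) + amount
        store.set(newBalance, forKey: balanceKey)
        return newBalance
    }

    static func withdraw(amount: Int) -> Int? {
        let current = checkBalance() ?? 0
        guard current >= amount else { return nil } // not enough balance
        let newBalance = current - amount
        store.set(newBalance, forKey: balanceKey)
        return newBalance
    }
}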

After creating it, you will find a folder with the name you have given; an IntentHandler and a plist file will be in it. Now go to the targets and check that both targets have the App Groups capability enabled with the same ID you added to the certificate’s app group.

Now you will have a .entitlements file in both targets. Check whether the app group ID is present in the App Groups list (if not, add it).

We will come to IntentHandler and Plist file after next step.

Custom Intent Definition & Response to that shortcut

We will add a custom intents file since we want to try a custom method; it is almost the same, only a new file is added that holds all the information about what to get from and pass back to the user.
In this file we have to define every action that we are going to expose as a shortcut. In this project I have two actions, sending and requesting money, so I will add two intents.

To add the file: File > New > File > Resource section > Siri Intent Definition File > (give the file a name) > Create

Siri Shortcut image1

Now, at the bottom-left of the file, click the + sign > New Intent.

Name the intent after the particular process you are adding to the file.

Siri Shortcut image4

As you can see in the above screenshot, there are certain terms you need to understand, so here are their definitions.

  1. Custom Intent
    • Category: Type of process you need to perform
    • Title: Heading of the process
    • Description: Small description on the process
    • Confirmation: Whether this process needs the user’s confirmation or can be done directly
  2. Parameter
    • Parameter: The variables (values) and their types that you need from the user. There can be multiple; provide all of them in this section.
  3. Shortcut Types
    • Parameter Combination: Clicking the + button asks for the combination of parameters used by this type of shortcut
      • Title: Text you want to show as the title in the Siri shortcut list. You can use parameters in the title too
      • Subtitle: Text you want to show as the subtitle in the Siri shortcut list. You can use parameters in the subtitle too
      • Background: Whether this process runs in the background or needs to open the app

You will only ever see the title and subtitle of the shortcut, so make sure they help the user understand what the shortcut does; don’t confuse or mislead them.

Now, let’s come to IntentHandler and plist again.

First, open the extension’s plist and, under NSExtension > NSExtensionAttributes > IntentsSupported, add a string for each intent you created in the intent definition file, named “(intent name)” with “Intent” as a postfix.

Siri Shortcut image7

Initially, your IntentHandler file should contain only this code.

Siri Shortcut image3

Remove any other extra code.
Now we will add the handler methods for the intents we added in the intent definition file.

// MARK: - Withdraw Intent Handling
extension IntentHandler: WithdrawDefinitionIntentHandling {
    func handle(intent: WithdrawDefinitionIntent, completion: @escaping (WithdrawDefinitionIntentResponse) -> Void) {
        // definition here
    }
    func confirm(intent: WithdrawDefinitionIntent, completion: @escaping (WithdrawDefinitionIntentResponse) -> Void) {
        // definition here
    }
}

We are going to use only the handle method right now. The confirm method can be used to check whether the process is ready for the next step or not (a small sketch of it appears right after the code below). So let us move forward and give a proper definition for all the possible conditions and responses in the handle method,
as I have done for the withdraw process.

// MARK: - Withdraw Intent Handling
extension IntentHandler: WithdrawDefinitionIntentHandling {
    func handle(intent: WithdrawDefinitionIntent, completion: @escaping (WithdrawDefinitionIntentResponse) -> Void) {
        if let amount = intent.amount?.intValue { // 1. Get the amount to be withdrawn
            if let newBalance = PaymentDetails.withdraw(amount: amount) { // 2. Get the new balance by invoking the withdraw method

                // 3. Create the response
                let response = WithdrawDefinitionIntentResponse(code: WithdrawDefinitionIntentResponseCode.successWithAmount, userActivity: nil)
                // 4. Return the new balance in the response
                response.availableBalance = NSNumber(value: newBalance)
                completion(response)
            } else { // If the balance is less than the amount to be withdrawn, return an error
                let response = WithdrawDefinitionIntentResponse(code: WithdrawDefinitionIntentResponseCode.failDueToLessAmount, userActivity: nil)
                response.availableBalance = NSNumber(value: PaymentDetails.checkBalance()!)
                response.requestAmount = NSNumber(value: amount)
                completion(response)
            }
        }
    }
}
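
For completeness, the empty confirm stub shown earlier could be filled in along these lines. This is a minimal sketch that assumes the standard response codes (.ready, .failure) Xcode generates for every custom intent:

func confirm(intent: WithdrawDefinitionIntent, completion: @escaping (WithdrawDefinitionIntentResponse) -> Void) {
    // Check that the request looks sane before Siri moves on to handle(intent:completion:).
    guard let amount = intent.amount?.intValue, amount > 0 else {
        completion(WithdrawDefinitionIntentResponse(code: .failure, userActivity: nil))
        return
    }
    completion(WithdrawDefinitionIntentResponse(code: .ready, userActivity: nil))
}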

You will be thinking:

Siri Shortcut image5

No..! Not at all. It is very simple and easy to understand. Let’s discuss all the important keywords used in the above code:

  1. WithdrawDefinitionIntentHandling : The custom protocol that was generated when we added the new intent in the intent definition file. Extend it and implement its handle and confirm methods.
  2. WithdrawDefinitionIntentResponse : The response type for the withdraw intent.
  3. WithdrawDefinitionIntentResponseCode.failDueToLessAmount : One of the response codes we added in the response section of the intent definition file.
  4. completion(response) : When all the handling is done, we pass the response to this completion block. It can be success, failure, or a custom code you have added.

Donating shortcut

Donating passes the data we have provided to Siri so it can create the shortcut. All these details should be filled in so the user can use the shortcut easily. Everything runs inline and in the background, according to the task we have donated. You can add a button for the user to add the shortcut, or donate it when the process is executed successfully.

// 1. Intent variable
let intent = DepositeDefinitionIntent()
intent.amount = NSNumber(value: amount) // amount is entered by the user in the textField

// 2. Interaction variable
let interaction = INInteraction(intent: intent, response: nil)

// 3. Donating the interaction
interaction.donate { (error) in
    guard error == nil else {
        print("Request problem : \(String(describing: error?.localizedDescription))")
        return
    }
    // If there is no error, your shortcut has been successfully donated
    print("Request Intent Donated")
}

Run the app. And your Siri shortcut app is ready to use.

Hint: If you want to make your shortcut more discoverable, so Siri suggests it and it shows up in the search list and on the lock screen, you can also donate an NSUserActivity, as sketched below.
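
A minimal sketch of donating an NSUserActivity from a view controller; the activity type string is a hypothetical identifier and must also be listed under NSUserActivityTypes in the app’s Info.plist:

import Intents
import UIKit

class DepositViewController: UIViewController {
    func donateDepositActivity() {
        // The activity type is a hypothetical identifier used only for illustration.
        let activity = NSUserActivity(activityType: "com.example.DemoSiriShortcut.deposit")
        activity.title = "Deposit money"
        activity.suggestedInvocationPhrase = "Deposit money"
        activity.isEligibleForSearch = true      // show in Spotlight search
        activity.isEligibleForPrediction = true  // let Siri suggest it (iOS 12+)
        self.userActivity = activity
        activity.becomeCurrent()                 // donate the activity
    }
}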

Detecting 3D Objects using ARKit 2.0


Overview

  • In iOS 12, Apple introduced ARKit 2.0 with some extreme AR features like Multiuser AR experience, 2D image detection and 3D object detection.
  • In this tutorial, I will show you how to scan a real-world object using Apple’s demo app, create an object reference file, and use that file in our app to detect the object.
  • In the first part, Image recognition and tracking using ARKit 2, we wrote a tutorial on image detection and tracking, which allows you to detect and track images that you add to the app. That is limited to images in two dimensions; in this tutorial we demonstrate how to detect 3D real-world objects in your ARKit app.

Prerequisites:

  • Xcode 10 (beta or above)
  • iOS 12 (beta or above)
  • iPhone 6S (Apple A9 chip or above)
  • Object reference files (.arobject files of your real objects)

How to get or create object reference file of your real world object :

    • There are two ways to create a .arobject file:
      • Create a separate app for scanning real-world objects; Apple provides an API for that.
      • Use Apple’s demo app to quickly scan your object and export it.
    • Apple’s demo app, Scanning and Detecting 3D Objects, lets you quickly scan your real-world object and use the result in your app.
    • Download and run this demo on a real device (iPhone 6S or above).
    • Before scanning an object you should know which objects can be scanned easily; below is an example of good and poor objects.
    • Metallic, transparent, refractive and glass-material objects do not work properly.
    • Rigid, texture-rich, non-reflective, non-transparent objects are good to track; also keep in mind that your environment needs good lighting for scanning and detecting objects.

Detecting 3D image2

  • How to scan an object using the Apple demo:
  • After scanning the object, you can test it, share the .arobject file to your Mac via AirDrop, and use this file in your 3D object detection app.

How to use the object reference file in your app?

    • Create a new project and select the Augmented Reality App template

Detecting 3D image3

    • Add ARObject file into your app, select Assets.xcassets, tap (+) plus button from the bottom of the screen.
    • Now select New AR Resource Group and change name to gallery then drag and drop ARObject reference file.
    • You can also add multiple ARObject file into AR Resource group.
    • See below video (video file name : add_object_file):

Detecting 3D image4

Detect real world object :

  • Create an ARWorldTrackingConfiguration object and load your gallery asset group.
  • Assign refObjects to configuration.

let configuration = ARWorldTrackingConfiguration()
guard let refObjects = ARReferenceObject.referenceObjects(inGroupNamed: "gallery", bundle: nil) else {
    fatalError("Missing expected asset catalog resources.")
}
configuration.detectionObjects = refObjects
sceneView.session.run(configuration)

  • Now it’s time to scan your object; when an object is successfully detected, the delegate method below is called.

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let objectAnchor = anchor as? ARObjectAnchor {
         // Object successfully detected.              
    }
}

Create AR interaction :

  • Now let’s interact with the detected object: we will show an arrow on top of the object to indicate where it is.
  • Add an arrow .scn file into the art.scnassets folder, and load that file when the object is detected.

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let objectAnchor = anchor as? ARObjectAnchor {
        let translation = objectAnchor.transform.columns.3
        let pos = float3(translation.x, translation.y, translation.z)
        let nodeArrow = getArrowNode()
        nodeArrow.position = SCNVector3(pos)
        sceneView.scene.rootNode.addChildNode(nodeArrow)
    }
}

func getArrowNode() -> SCNNode {
    let sceneURL = Bundle.main.url(forResource: "arrow_yellow", withExtension: "scn", subdirectory: "art.scnassets")!
    let referenceNode = SCNReferenceNode(url: sceneURL)!
    referenceNode.load()
    return referenceNode
}

Conclusion : –

  • In this tutorial we covered how to scan and detect a real-world object using ARKit 2.0.
  • According to Google search trends, augmented reality is a top trending technology in mobile; Digi-Capital estimates AR could reach a $120 billion market value by 2020.

Detecting 3D image1

Whatsapp bots to grow your business


Introduction

WhatsApp has a massive user base of 1.5 billion people, and consequently businesses have a keen eye on it. Until now, small businesses have run WhatsApp groups and put manpower into sharing offers, deals and other customer-connect initiatives.

In 2017, WhatsApp ran a pilot program in which it offered a limited ability to some companies to send messages; basically, a receive-and-send API was provided, e.g. to BookMyShow.

Businesses need to be where their clients are, and popular messaging applications are those places today. Businesses already use the officially launched APIs of popular messaging apps like Facebook Messenger, Skype, Telegram, Line and Viber. But until very recently, there was one notable holdout: WhatsApp.

More than 1.5 billion people globally use WhatsApp every day to talk to friends and family as well as for work and collaboration, and in August this year 2018, WhatsApp launched the highly anticipated WhatsApp Business API. Businesses have been clamouring for an authoritatively supported approach to communicate with WhatsApp clients throughout recent years. The excitement is more than justified.

Prerequisite

  • Dialogflow
  • Twilio

Getting Started

This would “break the ice” on the tremendous public demand to connect a conversation flow, or chatbot functionality, to WhatsApp. For now there is no publicly available WhatsApp API to use, so in this article we are going to use the Twilio WhatsApp API, which provides a sandbox version to test our bot before we make it public.
This will be a simple prototype that gets a response from Dialogflow and sends text messages.

Twilio & Dialogflow the integration

  • Create a Twilio account and create a project
  • Generate a number in the WhatsApp Sandbox
  • You will get a QR code in the next step; scan it with your device and your sandbox is now active
  • Create an agent in Dialogflow
  • Turn on Small Talk from the side panel
  • Copy the Account SID and Auth Token from the Twilio dashboard and paste them into the Dialogflow integration tab, as shown in the images below

Whatsapp bots image3

Whatsapp bots image2

Copy the request URL from the Dialogflow integration tab, paste it into the “A message comes in” input box in Twilio, and save.

Whatsapp bots image1

Now you will be able to have a conversation with the Dialogflow agent. If not, something is wrong; please check the process again, or comment your queries here.

Whatsapp bots image8

Whatsapp bots image7

Conclusion : –

Now you should have a basic understanding of how to connect Dialogflow with Twilio and Twilio with WhatsApp. This is the sandbox version, though you can apply to make it live from the Twilio console.

UIKit Dynamics


Overview

Hello everyone, today we are going to learn about UIKit Dynamics. I hope you all know what we can do with animations; UIKit Dynamics is something that helps you apply physics-based animations to your views. With very little effort and code you can easily create amazing animations.

Dynamic Animator

Let’s get started with UIDynamicAnimator: it is the object with which you add physics-related capabilities and animations to the underlying UIDynamicItems.

Some of the methods are very useful and necessary to apply UIDynamics:

init(referenceView view: UIView)			// To initialize animator
addBehavior(_ behavior: UIDynamicBehavior)	// To add a behavior
removeBehavior(_ behavior: UIDynamicBehavior)	// To remove particular behavior
removeAllBehaviors()						// To remove all attached behaviors

UIDynamicItem

It is a set of requirements (a protocol) that makes any object eligible to participate in UIKit Dynamics.

  1. UIDynamicItemBehavior: A class that helps configure the animation behaviour for one or more dynamic items.
  2. UIDynamicItemGroup: It groups dynamic items together so they can be manipulated and treated as a single unit.

Without wasting much time let’s start the fun part….coding!

Here is the listing of UIKit Dynamics variables that we will use in entire project.

var dynamicAnimator   : UIDynamicAnimator!
var gravityBehavior   : UIGravityBehavior!
var collisionBehavior : UICollisionBehavior!
var bouncingBehavior  : UIDynamicItemBehavior!
var pushBehavior: UIPushBehavior!
var snapBehavior: UISnapBehavior!
var attachmentBehavior : UIAttachmentBehavior!

2.1 UIGravityBehavior

Use: To apply gravity-like force to object.

UIKit Dynamics Gravity

Here’s the code how you can add Gravity behaviour to squareView

dynamicAnimator = UIDynamicAnimator(referenceView: self.view)
gravityBehavior  = UIGravityBehavior(items: [squareView])
dynamicAnimator.addBehavior(gravityBehavior)
gravityBehavior.gravityDirection = CGVector(dx: 0, dy: 1)    // For downward

//gravityBehavior.gravityDirection = CGVector(dx: 0, dy: -1) // For upward
//gravityBehavior.gravityDirection = CGVector(dx: 1, dy: 0)  // For left-side
//gravityBehavior.gravityDirection = CGVector(dx: -1, dy: 0) // For right-side

Setting only the gravity behaviour is not enough, because with this code the squareView falls off the bottom of the screen. To fix this we have to set boundaries that the squareView collides with, so it does not exit the screen.
In the code below we use the screen bounds as the boundary.

collisionBehavior = UICollisionBehavior(items: [squareView])
collisionBehavior.translatesReferenceBoundsIntoBoundary = true
dynamicAnimator.addBehavior(collisionBehavior)

You can also make object bounce by adding elasticity to its behaviour.

bouncingBehavior = UIDynamicItemBehavior(items: [squareView])
bouncingBehavior.elasticity = 0.70
dynamicAnimator.addBehavior(bouncingBehavior)
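
UIDynamicItemBehavior offers a few more knobs besides elasticity; here is a small sketch with arbitrary illustrative values:

bouncingBehavior.friction = 0.5         // resistance when sliding against other items or boundaries
bouncingBehavior.resistance = 0.2       // linear damping of the item's movement
bouncingBehavior.density = 1.5          // relative mass used during collisions
bouncingBehavior.allowsRotation = true  // let the item spin when forces are applied off-centre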

Here’s some more properties of UIGravityBehavior

gravityBehavior.angle = CGFloat(90 * (Double.pi/180)) // 90 Degree To RADIANS
gravityBehavior.magnitude = CGFloat(5)                // Gravity force
// OR
gravityBehavior.setAngle(CGFloat(180 * (Double.pi/180)), magnitude: 0.2)

UIKit Dynamics GravityBehavior

2.2 UIPushBehavior

Use: Applies a force to its dynamic items to change their position in the specified direction.
UIKit Dynamics Push

To apply a force that pushes the squareView upwards, we need to define the direction as well as the force/magnitude with which it is pushed.

pushBehavior = UIPushBehavior(items: [squareView], mode: .continuous)
pushBehavior.magnitude = 0.1                         // Speed/force
pushBehavior.pushDirection = CGVector(dx: 0, dy: -1) // To up side
pushBehavior.setTargetOffsetFromCenter(UIOffset(horizontal: -10, vertical: 0), for: squareView)                                     // To spin clockwise
dynamicAnimator.addBehavior(pushBehavior)

UIKit Dynamics PushBehavior

2.3 UICollisionBehavior

Use: Collisions of dynamic items with each other and with the specified boundaries are managed using this behaviour.

UIKit Dynamics Collision
How can we make collisions? Let’s do it by dropping multiple views onto each other, just for fun.

// To fall the views we will add UIGravityBehavior
// To collide that views we are going to use UICollisionBehavior
gravityBehavior = UIGravityBehavior(items: squareViews)
collisionBehavior = UICollisionBehavior(items: squareViews)
collisionBehavior.collisionMode = .everything
// collisionBehavior.collisionMode = .boundaries 
// collisionBehavior.collisionMode = .items
dynamicAnimator.addBehavior(gravityBehavior)
dynamicAnimator.addBehavior(collisionBehavior)

With the code above, all the views will fall and exit the screen, so we need to set a boundary they can collide with.

// It sets a boundary around the whole reference view mentioned in dynamicAnimator
collisionBehavior.translatesReferenceBoundsIntoBoundary = true

Or you can add boundaries one by one

// To set bottom of the screen as a boundary
collisionBehavior.addBoundary(withIdentifier: "bottomBoundary" as NSCopying, from: CGPoint(x: 0, y: self.view.frame.size.height), to: CGPoint(x: self.view.frame.size.width, y: self.view.frame.size.height))

At last you can set boundary using:

collisionBehavior.setTranslatesReferenceBoundsIntoBoundary(with: UIEdgeInsets(top: selfFrame.maxY, left: selfFrame.maxX, bottom: selfFrame.maxY, right: selfFrame.maxX))

UIKit Dynamics CollisionBehavior

2.4 UISnapBehavior

Use: It behaves like a spring that starts with a damped motion and settles at a specific point over time.
As we all know, snapping means doing something quickly or suddenly. This behaviour does the same: the dynamic item is snapped from its position to a snap point according to its damping value.
UIKit Dynamics Snap

Let’s simply snap three buttons: create an outlet collection for the buttons and add a snap behaviour to each button with the following code:

for button in buttons {
    let originalPosition = button.center
    button.center = CGPoint(x: self.view.frame.width / 2, y: -button.frame.height)
    snapBehavior = UISnapBehavior(item: button, snapTo: originalPosition)
    snapBehavior.damping = 0.2      // amount of oscillation of dynamic item
    dynamicAnimator.addBehavior(snapBehavior)
}

The buttons arrive from the top of the screen and snap to their original centre points. Damping plays the main role, because its value controls how much the dynamic item oscillates while snapping.
UIKit Dynamics SnapBehavior

2.5 UIAttachmentBehavior

Use: It is an object that creates a relationship between an object and an anchor point; in our scenario it is going to be a relationship between a view and an anchor.

UIAttachmentBehavior behaves like a pendulum: an anchor, and another object that oscillates (moves or swings) with respect to the anchor point.

UIKit Dynamics Attachment
We let the attachedView fall and swing with respect to the anchorView.

gravityBehavior = UIGravityBehavior(items: [attachedView])
attachmentBehavior = UIAttachmentBehavior(item: attachedView, attachedToAnchor: anchorView.frame.origin)
attachmentBehavior.damping = 0.5
dynamicAnimator.addBehavior(gravityBehavior)
dynamicAnimator.addBehavior(attachmentBehavior)

To create more advanced animations you can use more properties of UIAttachmentBehavior:

attachedBehaviorType	// Type of attachment behaviour
frequency			// Oscillation frequency of the attachment behaviour
length			// Distance in points between the two attached objects
frictionTorque		// Amount of force needed to overcome rotational force around the anchor point
attachmentRange		// Range of motion of the attachment behaviour
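
For example, a springy attachment could be configured roughly like this (the values are arbitrary and only for illustration):

attachmentBehavior.frequency = 2.0        // oscillations per second of the spring
attachmentBehavior.length = 150           // rest distance between the item and the anchor, in points
attachmentBehavior.frictionTorque = 0.5   // resistance to rotation around the anchor
dynamicAnimator.addBehavior(attachmentBehavior)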

UIKit Dynamics AttachmentBehavior

Conclusion : –

Thank you for reading this article. I hope you have learned some simple yet cool animations using UIKit Dynamics; you can download the project from GitHub, and feel free to contribute to it.

Progressive Web Apps: The Imminent Mobile Experience?


Introduction

Progressive web Apps (PWA) – The next step you should take now as a web developer.

Ever wondered whether you can build a mobile application that works like a pro and feels like a native Android or iOS application?

As a web developer, what do you use to build web apps? HTML, CSS, JavaScript? Using the same, you can build the mobile application.

Progressive web app image2

Progressive web apps mean progressively enhancing your web application by using modern web APIs.

Simply put, it’s just a web page that has taken all the necessary “vitamins” to behave like a native mobile app. It is the combination of the best of apps and the best of the web.

“A Progressive Web App uses modern web capabilities to deliver an app-like user experience.” – Progressive Web Apps

What Progressive apps can give you?

Loading speed, usability and readability do matter, right? Well, PWA is the solution. It’s a myth that users will visit your website from their mobile browser and then install your application from the Play Store/App Store.

  • Reliable – Load as fast as possible and you will never show the “dinosaur”, even in worst network conditions.
  • Fast – Respond quickly to user interactions with smooth animation.
  • Engaging – Feels like a native app on the device, with a great user experience.

Let’s compare the steps it takes to start using a native app versus a PWA.

Native app:
  • Go to the Play Store
  • Search for the app
  • Click Install
  • Accept various permissions
  • Launch the app
  • Sign up
  • Use the application

PWA:
  • Open the website
  • Add to home screen
  • Open the app from the home screen
  • Use the application

You can see that native apps take time to install and also require more free space, while a PWA is simple to install and uses less space.

Now let’s see what makes PWA more powerful,

  1. Service Worker
  2. Webapp manifest
  3. Responsive design
  4. Fetch API
  5. Caching Files
  6. IndexedDb
  7. Push Notification
  8. Modern Web APIs

By using the above building blocks you can turn any website into a progressive web app, whether it is a simple website or a single page application (SPA). Angular, React and Vue also provide support to make your web app a PWA.

Why your business should care about PWA?

  • Fast user experience
  • Works offline
  • Supports background sync
  • DoubleClick by Google found that 53% of users will abandon a site if it takes longer than 3 seconds to load! And once loaded, users expect it to be fast: no janky scrolling or slow-to-respond interfaces.
  • You can update your web app and the PWA will automatically pick up that update
  • Sends notification even if you are not using App (user engagement)

A case study: George.com, a leading UK clothing brand and part of ASDA Walmart, saw a 31 percent increase in mobile conversion after upgrading their site to a Progressive Web App (PWA).

  • 3.8x – Faster average page load time
  • 2x – Lower bounce rate
  • 31% – Increase in Conversion Rate
  • 20% – More page views per visit
  • 28% – Longer average time on site for visits from Home screen

Challenges?

Cross-browser support: Chrome supports all PWA features, and other browsers are getting better.

Limited legitimacy: There is no official Play Store/App Store listing to upload and download the app, though the Microsoft Store has released the ability to publish PWAs.

Cross-application communication: Unlike native apps, a PWA cannot easily communicate with other apps installed on the device.

Conclusion : –

We cannot say that Progressive Web Apps will kill the native app market, but PWAs are capable enough to attract users to use them more than native apps.

According to Henrik Joreteg, “PWA is the single biggest thing to happen on the mobile web since Steve introduced the iPhone!”


Event Loop – Javascript


Introduction

In JavaScript we have an amazing number of libraries, tools and other things that make our work easier, but did you ever try to understand how it all works under the hood? JavaScript is a widespread language, so many of us are attracted to higher-level tools without understanding how they work deep down, which is not the right way. You should always know how your code actually gets executed.

Today we will see how callbacks are executed in JavaScript. We all know JavaScript is single-threaded. We also know that it supports asynchronous behaviour, but this behaviour is not part of JavaScript itself; it is accessed through browser APIs which are built on top of JavaScript.

Simple Javascript – Example

First, let’s see how simple JavaScript code gets executed.

Event Loop image1

You can see it runs line by line. JavaScript is single-threaded, so:
one thread == one call stack == one thing at a time, and so we have one call stack.

Event Loop

 

Event Loop image2

Heap –

It is simply where memory allocation happens; objects live here.

Call stack –

It is a data structure which records where in the program we are. If we step into a function, we push a frame onto the stack, and when we return from the function, we pop it off the stack.

Web APIs –

They are not part of JavaScript, but they are built on top of the core JavaScript language and give you extra features to use in your JavaScript code. For example, setTimeout lets you delay some code from being executed for a defined time.

Callback queue –

When you call setTimeout or any async operation, the Web APIs add its callback to the callback queue. This is also a data structure; its job is to store the callback functions in the correct order.

Event Loop –

This is a constantly running process which checks whether the call stack is empty. If it is, it takes the next function from the callback queue and pushes it onto the call stack. So remember, a callback function only gets executed when the call stack is empty; always make sure that all callback functions get a chance to enter the call stack.

Event Loop – Example

 

Event Loop image3

If you set the time to 0 in setTimeout, the callback will still run last. This time does not stand for a delay after which the function will definitely execute; it is the minimum time the function needs to wait.

So, this is it for now. Keep writing meaningful code, and keep digging into the deep workings of any functionality you use.

Abstracts of ‘Made by Google’ 2018 event


Introduction

google_event_image9

Are you a tech-geek and do you love knowing about latest tech-trends and announcements? You have arrived at the correct place.

Google organised an event called Made by Google on October 9, 2018 where Google’s new devices including Pixel 3 were announced. I got the complete event covered for you.
Sit back, relax and enjoy the abstracts of the event 🙂

Google Pixel 3

google_event_image5

With a hard-fought battle going on between smartphone makers over price, specifications and features, Google has also jumped in with its most awaited smartphones of the year, the Google Pixel 3 and Pixel 3 XL. They were the limelight of the event held by Google on October 9, 2018.

Pixel 3 and Pixel 3XL configurations

Key Features

Model Name | Google Pixel 3 | Google Pixel 3 XL
RAM | 4GB | 4GB
ROM | 128GB | 128GB
Display Size | 13.97 cm (5.5 inch) | 16.0 cm (6.3 inch) QHD+ Display
Display Type | Full HD | Full HD
Battery | 2915 mAh | 3430 mAh
Processor | Qualcomm Snapdragon 845 64-bit Processor | Qualcomm Snapdragon 845 64-bit Processor
Color | 1. Just Black, 2. Clearly White, 3. Not Pink | 1. Just Black, 2. Clearly White, 3. Not Pink

General

Sim | Single | Single
Hybrid Slot | No | No
Touch Screen | Yes | Yes
OTG compatible | No | No
Quick Charging | Yes | Yes
In The Box | Handset, USB Type-C 18W Adaptor with USB-PD, C-C Cable (USB 2.0), SIM Tool, Quick Switch Adapter, 3.5mm to USB-C Headphone Adapter, USB Type-C Earbuds | Handset, USB Type-C 18W Adaptor with USB-PD, C-C Cable (USB 2.0), SIM Tool, Quick Switch Adapter, 3.5mm to USB-C Headphone Adapter, USB Type-C Earbuds
Face Unlock | Yes | Yes
3D face recognition | Yes | Yes

Display Features

Display Size | 13.97 cm (5.5 inch) | 16.0 cm (6.3 inch)
Resolution | 2160 x 1080 pixels | 2960 x 1440 pixels
Resolution Type | FHD+ | QHD+
GPU | Adreno 630 | Adreno 630
Display Colors | 16.77M | 16.77M
Body Type | Glass | Glass
USB Type-C | Yes | Yes

OS & Processor Features

Operating System | Android 9 Pie | Android 9 Pie
Processor Type | Qualcomm Snapdragon 845 64-bit | Qualcomm Snapdragon 845 64-bit
Processor Core | Octa Core | Octa Core
Primary Clock Speed | 2.5 GHz | 2.5 GHz
Secondary Clock Speed | 1.6 GHz | 1.6 GHz

Memory & Storage Features

Internal Storage | 64 GB & 128 GB | 64 GB & 128 GB
RAM | 4 GB | 4 GB

Camera Features

Primary Camera | 12.2MP | 12.2MP
Primary Camera Features | 1.4 micrometer, Autofocus + Dual Pixel Phase Detection, Optical + Electronic Image Stabilization, Spectral + Flicker Sensor Combo, f/1.8 Aperture, Field of View – DFoV (76 Degree) | 1.4 micrometer, Autofocus + Dual Pixel Phase Detection, Optical + Electronic Image Stabilization, Spectral + Flicker Sensor Combo, f/1.8 Aperture, Field of View – DFoV (76 Degree)
Secondary Camera | 8MP + 8MP | 8MP + 8MP
Secondary Camera Features | Wide-angle and Telephoto Cameras, Wide-angle – f/2.2 Aperture and DFoV 107-Degree, Telephoto – f/1.8 Aperture and DFoV 75-Degree, Wide Angle Selfie + Natural Passive Authentication | Wide-angle and Telephoto Cameras, Wide-angle – f/2.2 Aperture and DFoV 107-Degree, Telephoto – f/1.8 Aperture and DFoV 75-Degree, Wide Angle Selfie + Natural Passive Authentication
Full HD Recording | Yes | Yes
Video Recording Resolution | 2160p | 2160p
Dual Camera Lens | Secondary Camera | Secondary Camera

Connectivity Features

Supported Networks | 4G VoLTE, 4G LTE, UMTS, GSM | 4G VoLTE, 4G LTE, UMTS, GSM
Bluetooth Version | 5 | 5
Wi-Fi Version | 802.11a/b/g/n/ac (2.4G + 5GHz) 2×2 MIMO | 802.11a/b/g/n/ac (2.4G + 5GHz) 2×2 MIMO
NFC | Yes | Yes

Other Details

SIM Size | Nano SIM | Nano SIM
Graphics PPI | 443 PPI | 523 PPI
Sensors | Active Edge v2, Proximity / Ambient Light Sensor, Accelerometer / Gyrometer, Magnetometer, Pixel Imprint – Back-mounted Fingerprint Sensor for Fast Unlocking, Barometer, Android Sensor Hub, Advanced X-axis Haptics for Sharper / Defined Response | Active Edge v2, Proximity / Ambient Light Sensor, Accelerometer / Gyrometer, Magnetometer, Pixel Imprint – Back-mounted Fingerprint Sensor for Fast Unlocking, Barometer, Android Sensor Hub, Advanced X-axis Haptics for Sharper / Defined Response
Other Features | Design – Metal Frame + Soft Touch Glass + Refined Iconic Shade, IP68 Water and Dust Resistant, Processor – Pixel Visual Core, Titan M Security Module, Charging – USB Type-C 18W Adaptor with USB-PD 2.0, 18W Fast Charging, Ports – USB Type-C 3.1 Gen 1, Qi Wireless Charging, Google Cast, Location – Rest of World, Network – Intraband Contiguous Uplink CA CAT13, Google Assistant, Hearing Aid Compatibility – M3/T3 HAC Rating, AR/VR – Daydream-ready (Built for VR to Work with Google Daydream View Headset), Upto 10 Layer Support (1Gbps Max Download) – Cat 16 (Carrier Specific) | Design – Metal Frame + Soft Touch Glass + Refined Iconic Shade, IP68 Water and Dust Resistant, Processor – Pixel Visual Core, Titan M Security Module, Charging – USB Type-C 18W Adaptor with USB-PD 2.0, 18W Fast Charging, Ports – USB Type-C 3.1 Gen 1, Qi Wireless Charging, Google Cast, Location – Rest of World, Network – Intraband Contiguous Uplink CA CAT13, Google Assistant, Hearing Aid Compatibility – M3/T3 HAC Rating, AR/VR – Daydream-ready (Built for VR to Work with Google Daydream View Headset), Upto 10 Layer Support (1Gbps Max Download) – Cat 16 (Carrier Specific)

Battery & Power Features

Battery Capacity | 2915 mAh | 3430 mAh

Dimensions

Width | 68.2 mm | 76.7 mm
Height | 145.6 mm | 158.0 mm
Depth | 7.9 mm | 7.9 mm
Weight | 148 g | 184 g

Price in India (Approx.)

4GB RAM, 64GB Storage | Rs. 71,000 | Rs. 83,000
4GB RAM, 128GB Storage | Rs. 80,000 | Rs. 92,000

Google Pixel 3 is about AI + Hardware + Software

The Google Pixel 3 is built with an exceptionally beautiful design, with a great touch and a light feel in your hands. It has a matte finish all over, with a glass back and a two-tone iconic design that helps avoid fingerprints on the back.

  1. The camera with AI which won’t miss a shot

Google unveiled what it calls the world’s best camera in a smartphone. The smartest camera gets even better with the Pixel 3 and Pixel 3 XL: a camera with AI technology which won’t miss a shot.

– The Pixel 3 camera has an HDR+ mode with zero shutter lag and the Pixel Visual Core, which makes the camera even faster and smarter with new computationally intensive features.

– The best aspect of the Pixel 3 is unlimited storage of photos and videos at original quality.

  • Top Shot:

The Pixel 3 camera comes with Top Shot, by which the camera is smart enough to know a good photo when it sees it. The camera shoots lots of frames before and after you hit the shutter button, and with the help of machine learning and AI it gives you the best shot of the lot.

google_event_image11

  • Night Sight:

– The Pixel 3 camera comes with Night Sight, which is smart enough to take great pictures in low light and at night.
– A low-light picture is turned into a great picture with the help of machine learning.

  • Group Selfie Camera:

google_event_image8

– The Pixel 3 comes with a second camera on the front that captures 184% more in your selfies than the iPhone XS.
– Now you can fit everybody in your frame without a selfie stick or a long arm.

  • Playground:

google_event_image4

– The Pixel 3 camera comes with the Playground feature.

– Playground brings AR stickers to life.

– Playground is built into both the front and rear cameras, with lots of new characters, including the Avengers to celebrate Marvel’s 10-year anniversary.

– Playground characters are rendered to feel lifelike in the scene. They can even interact with each other, respond to your actions and facial expressions, and look like they are really in the scene with you.

– Playground is launching on the Pixel 3 and will soon roll out to other Pixel phones.

  • Photo Booth Mode :

– The Pixel 3 camera comes with Photo Booth mode, which automatically snaps photos when there is a smile or a funny face in the frame, with no need for the shutter button.

  • Motion Auto-Focus :

google_event_image6

– The Pixel 3 camera comes with Motion Auto-Focus, by which the camera is smart enough to keep a moving subject in focus.

– You just have to tap on people, a puppy or other objects that are in motion or won’t hold still, and they will stay in focus as they move around in the frame.

– There is also an enhancement to Portrait Mode, with which you can edit the depth of field, change the focal subject of the photo, or make the colours pop in your captured picture.

  • Google Lens With AI in Pixel Camera:

– The Pixel 3 and Pixel 3 XL cameras come with Google Lens built in, which also works without a data connection. It is a combination of the Pixel Visual Core and computer-vision-powered Search.

  2. Google Assistant with Duplex technology

– The Pixel 3 phones come with the Google Assistant with Duplex technology, and will be the first phones to get the Duplex facility.

– The Pixel 3 will be the phone that answers itself.

  • Call Screen Feature :

– The Pixel 3 comes with the Call Screen feature, which answers your calls for you.
Example: when you are in a meeting or at dinner and can’t pick up an important call, don’t worry; just tap the Screen Call button on the screen, and your phone will answer for you and ask who is calling and why. Isn’t it an awesome feature?

– The conversation is transcribed in real time on the screen, and you can decide whether to pick up or not.

– The Call Screen feature launches with the Pixel 3 this month and will roll out to the rest of the Pixel family from next month.

  • Flip To Shhh :

– The Pixel 3 comes with the Flip to Shhh feature: you just have to turn your phone face down on the table and it goes into Shhh mode, an easy gesture to minimize distractions and mute the phone.

  3. Greater security with the Titan chip

– The Pixel 3 and Pixel 3 XL come with the built-in Titan M chip, which is a big step forward in how user data is protected: credentials, disk encryption, app data and the integrity of the OS.

– The Titan M chip brings greater safety to users’ secret data like passwords, credentials, app data, etc.

google_event_image1

  4. Google Pixel 3 vs Pixel 2
 | Pixel 3 | Pixel 2
Screen Size | 5.5-inch AMOLED | 5-inch AMOLED
Screen resolution | 2,160 x 1,080 pixels (443 pixels per inch) | 1,920 x 1,080 pixels (441 pixels per inch)
OS | Android 9.0 Pie | Android 8.0 Oreo (upgradeable to Android 9 Pie)
Storage space | 64GB, 128GB | 64GB, 128GB
RAM | 4GB | 4GB
Water resistance | IPX8 | IP67
Camera | Single 12.2MP rear, dual 8MP and 8MP front camera | Single 12.2MP rear, 8MP front camera
Processor | Qualcomm Snapdragon 845 | Qualcomm Snapdragon 835
Ports | USB-C 3.1 Gen 1 | USB-C 1.0
Battery | 2,915mAh, Fast charging, Qi wireless charging | 2,700mAh, Fast charging
Colors | Just Black, Clearly White, Not Pink | Kinda Blue, Just Black, Clearly White
Price | $800 | $650

Before wrapping up with Pixel 3, let me show you something…

google_event_image13

And the unending debate begins…. (grabs popcorn) XD

google_event_image12

Google Pixel Stand

google_event_image10

– Google Pixel 3 and 3 XL comes with Google Pixel Stand, the smartest wireless charger ever which charges super-fast even through mobile cases.

– It is not just an ordinary charger; while charging, it changes what the phone shows and does with the help of the Google Assistant.

– You can control the phone with your voice or with on-screen touch suggestions.

– It charges at up to 10 watts, fast even through phone cases.

– A quick Google Assistant shortcut can also turn your Pixel into a digital photo frame.

– It will cost around $79 and will be launched with Google Pixel 3.

Google Pixel Slate

google_event_image7

– Google reveals its new Chrome OS tablet with a dual front-firing speakers and a 12.3-inch Molecular Display. Google has adjusted chrome OS to fit the tablet.

– 12 hours battery life

– 8MP rear and front cameras

– Google Assistant with deeper integration

– Split screen

– Do not disturb feature

It will cost around $599 and will be launched later in this year.

Google Home Hub

google_event_image3

– Adding a feather to the cap of smart home speakers, Google introduced the Google Home Hub, which allows users to interact with Google Photos, YouTube and much more.

– It has a 7-inch screen.

– Google has decided not to include a camera, to ensure the privacy of its users.

– Hub has a built-in ‘Home View’ section which drags from the top of the screen and provides a single UI to manage our smart home appliances like Smart Door Lock, Smart Lights, Thermostats and many more.

It will cost $149 and will be launched on October 22

Get Your Hands On Pixel

– The Pixel 3 will be the first phone that ships with YouTube Music; Google is giving Pixel owners 6 months of YouTube Music for free.

– You can get your hands on the Pixel 3 now, as it is available for pre-order, with a starting price of $799.

– The Pixel 3 will arrive in U.S. stores on October 18, 2018.

– The Pixel 3 will arrive in the countries listed below on November 1, 2018.

Australia, Germany, Italy, Spain, USA, Canada, India, Japan, Taiwan, France, Ireland, Singapore, UK

– Pre-booking of the Pixel 3 and Pixel 3 XL has begun on Flipkart.

Custom Keyboard in Swift


Overview

In this tutorial we’ll create a Custom Keyboard extension. Custom keyboard extensions allow you to use your own designed keyboard in apps, and even let the user add an image to the keyboard to make it steal the show.

Implementation

Step 1 – Create a new Single View Application

custom_keyboard_image5

Step 2 – Now create keyboard extension, go to File → New → Target.

custom_keyboard_image10

Step 3 – Select Custom keyboard Extension from Application Extension.

custom_keyboard_image1

Provide a name of your Extension and select your programming language.

custom_keyboard_image2

Now, it will include following list of files in your project folder.

custom_keyboard_image3
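
The new target contains a KeyboardViewController subclass of UIInputViewController. A minimal sketch of what it might look like once you add a couple of keys (the button titles and the simple stack-view layout are assumptions for illustration):

import UIKit

class KeyboardViewController: UIInputViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // A key that types text into the host app.
        let helloKey = UIButton(type: .system)
        helloKey.setTitle("Hello", for: .normal)
        helloKey.addTarget(self, action: #selector(didTapHello), for: .touchUpInside)

        // The "globe" key that switches to the next keyboard (required by Apple).
        let nextKeyboardKey = UIButton(type: .system)
        nextKeyboardKey.setTitle("🌐", for: .normal)
        nextKeyboardKey.addTarget(self, action: #selector(handleInputModeList(from:with:)), for: .allTouchEvents)

        let stack = UIStackView(arrangedSubviews: [helloKey, nextKeyboardKey])
        stack.axis = .horizontal
        stack.distribution = .fillEqually
        stack.frame = view.bounds
        stack.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(stack)
    }

    @objc func didTapHello() {
        // Insert text at the current cursor position in the host app.
        textDocumentProxy.insertText("Hello ")
    }
}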

Adding keyboard

Go to Settings → General → Keyboard → Keyboards

Now Click on Add New Keyboard..

custom_keyboard_image6

Here different types of keyboards are shown, such as suggested, third-party and others; click on your custom keyboard under the third-party keyboards.

custom_keyboard_image11

Now you need to allow full access to the custom keyboard, which is needed to set a background image for the keyboard.

custom_keyboard_image9

Now you have successfully added your custom keyboard in your device, open any app and change your keyboard to custom keyboard.

Open your keyboard, click on the globe icon and a pop up view will appear, select your keyboard.

custom_keyboard_image8

Now you have successfully set your custom keyboard as default one.

custom_keyboard_image4

Changing Image in Custom keyboard

custom_keyboard_image7
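
The article does not list the code for this step, so here is a minimal sketch of setting a background image from the extension’s asset catalog inside KeyboardViewController; the image name “keyboardBackground” is an assumption:

override func viewDidLoad() {
    super.viewDidLoad()
    // Stretch an image view behind all the keys.
    let background = UIImageView(image: UIImage(named: "keyboardBackground"))
    background.contentMode = .scaleAspectFill
    background.frame = view.bounds
    background.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    view.insertSubview(background, at: 0)
}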

Limitations

Adding a custom keyboard doesn’t mean it will always be available; here are the cases where a custom keyboard is not available:
→ If the text field’s input type is a password, i.e. when secureTextEntry is set to true.
→ When the keyboard type is set to UIKeyboardTypePhonePad or UIKeyboardTypeNamePhonePad.
→ Last but not least, if the host app declines the use of keyboard extensions in the AppDelegate method application(_:shouldAllowExtensionPointIdentifier:) (see the sketch below).
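
A minimal sketch of an app refusing custom keyboards; this method goes inside the host app’s AppDelegate:

// Inside AppDelegate:
func application(_ application: UIApplication,
                 shouldAllowExtensionPointIdentifier extensionPointIdentifier: UIApplication.ExtensionPointIdentifier) -> Bool {
    // Returning false for the keyboard extension point blocks third-party keyboards in this app.
    return extensionPointIdentifier != .keyboard
}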

Yudiz Achieves High Praise on Clutch!


Although we’ve been named one of the best mobile app development companies in India, there’s always room for constructive feedback and means to make our team even better. For that reason, we’ve teamed up with Clutch to highlight past projects and create a transparent view of our operations. Anyone viewing our profile will get a true glimpse of our capabilities and how we’ve delivered great projects for world-class clients.

Clutch is a B2B research, ratings, and reviews platform that aims to create perfect matches between the buyers and sellers visiting their site; they cover thousands of companies across 500+ industry verticals ranging from app developers in India to answering services in New York. At the core of their company analysis lies client interviews, where Clutch speaks to a company’s references concerning the challenges, solutions, and results of their time working together. Coupled with this, Clutch’s multi-faceted scoring methodology takes into account a firm’s market presence, clientele, and industry recognition. They are thereby able to identify the absolute best firms in a particular industry.

In their assessment, Clutch has evaluated our capabilities in UX/UI design, mobile app, AR/VR development, and more. Having interviewed a number of clients, here are some of the things they’ve had to say about us:

“Yudiz Solutions offers us significant value. I’ve had a linear increase in requests for digital work from new clients since they came along…my only real metric is client satisfaction, which they deliver every time,” mentioned the Principal Operator of a marketing studio.

They continued, commending our project management expertise:

“They adapt to my quirks and get the job done…communication is right on, which is generally the biggest problem with offshore agencies. Unlike teams who just nod and say yes, they take care to ensure they actually understand me.”

Another client, the Founder of a digital services company, described the measurable impact of our work:

“They’ve had a significant impact on our ability to get business. Once we completed several sophisticated AR projects, we were able to leverage them to pitch for bigger business. We’ve been able to increase our capacity to close contracts thanks to Yudiz Solutions’ support.”

Elaborating, they focused on what they thought made our team unique:

“Besides their technical strengths, they’re a young, honest company and are upfront about their skill set. They won’t lie to secure a project and then scurry to find the talent to back their claims. Furthermore, I appreciate that they’re always available and can have fun while working. It’s hard to find partners like that.”

We were also featured by The Manifest, as one of the best app developers in Ahmedabad! The Manifest is Clutch’s sister website, providing insights on business news and industry lists; it’s great to be recognized on both platforms.

To read more about our clients’ experiences, stay up to date with our Clutch profile here. We encourage you to reach out to our team at any time to begin working on your next great project.

Swift Unmanaged


Overview

What is the use of Unmanaged in Swift?

There are certain situations where we need a low-level C library in our project to communicate with Swift code. Swift’s ARC memory management requires the compiler to manage each object reference, but once a pointer is passed to a C library, the compiler can no longer manage that reference, so we have to manage it manually. Unmanaged wraps a reference to an object and provides functions to move that reference in and out of ARC’s control.

From Swift to C

There are two methods to carry out this.

  1. Using Unmanaged.passRetained(obj) creates an unmanaged reference with an unbalanced retain. This grants ownership to whatever C API you’re passing the pointer to. It must be balanced with a release later, otherwise the object will leak.

let objPtr = Unmanaged.passRetained(obj).toOpaque()
FunctionCall(objPtr)

  2. Using Unmanaged.passUnretained(obj) leaves the object’s retain count unchanged. This does not grant any ownership to the C API. If the object is deallocated while the C side still holds the pointer, the app may crash, because the C code does not retain the object.

let objPtr = Unmanaged.passUnretained(obj).toOpaque()
FunctionCall(objPtr)

Here, objPtr is an UnsafeMutableRawPointer, which is Swift’s equivalent of C’s void *.

From C to Swift

To retrieve a reference from C into Swift code, we need to create an Unmanaged value from the raw pointer, then take a reference from it

  1. Using takeRetainedValue performs an unbalanced release; it balances an unbalanced retain previously performed with passRetained, or by C code that confers ownership on your code.

let obj = Unmanaged<ClassName>.fromOpaque(objPtr).takeRetainedValue()
obj.method()

  2. Using takeUnretainedValue obtains the object reference without performing any retain or release.

let obj = Unmanaged<ClassName>.fromOpaque(objPtr).takeUnretainedValue()
obj.method()

Putting it together, this is how you communicate with a low-level C function from Swift.
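
A minimal, self-contained sketch of the round trip. The C API here is simulated with a Swift function taking a C function pointer; in a real project it would come from a bridged header:

import Foundation

final class Logger {
    let prefix: String
    init(prefix: String) { self.prefix = prefix }
    func log(_ message: String) { print("\(prefix): \(message)") }
}

// Stand-in for a C API that stores a context pointer and later invokes a C function pointer with it.
typealias CCallback = @convention(c) (UnsafeMutableRawPointer?) -> Void
func fakeCAPI(_ context: UnsafeMutableRawPointer?, _ callback: CCallback) {
    callback(context) // the C side eventually calls back with the same pointer
}

let logger = Logger(prefix: "C->Swift")

// Swift -> C: hand over an unbalanced +1 retain so the object stays alive while C holds the pointer.
let context = Unmanaged.passRetained(logger).toOpaque()

fakeCAPI(context) { ptr in
    guard let ptr = ptr else { return }
    // C -> Swift: recover the object and balance the earlier retain.
    let logger = Unmanaged<Logger>.fromOpaque(ptr).takeRetainedValue()
    logger.log("callback fired")
}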
