
Advanced authentication using fingerprint in your Android application


Overview

This blog is about implementing advanced authentication using fingerprint in an Android app. Android Marshmallow and above have introduced a set of APIs that make it easy to use the touch sensor to develop an Android fingerprint app.

Why use fingerprint authentication?

There are several benefits to adding fingerprint authentication to your application:

  • Quick and reliable way of authenticating a user’s identity
  • Secure: with fingerprint authentication, online transactions become more convenient, as a unique fingerprint ensures that the app can be unlocked only by you and is practically impossible to guess.

Follow these steps to enable fingerprint authentication in your app:

  • Verify that the device is running Android 6.0 (M) or above (minSdkVersion 23)
  • Verify that the device features a fingerprint sensor.
  • Verify that the lock screen is protected by PIN, password or pattern and at least one fingerprint is registered on the smartphone
  • Get access to Android keystore to store the key used to encrypt/decrypt an object
  • Generate an encryption key and the Cipher
  • Start the authentication process
  • Implement a callback class to handle authentication events

Updating Manifest

  • First of all, we need to add the USE_FINGERPRINT permission to your AndroidManifest.xml file:
    <uses-permission android:name="android.permission.USE_FINGERPRINT" />
  • The app requires access to the device’s touch sensor in order to receive fingertip touch events.
  • By adding the following line, you declare that your app requires a touch sensor, making fingerprint authentication mandatory:
    <uses-feature android:name="android.hardware.fingerprint" android:required="true"/>
  • The above line lets users install the app only on devices that fulfil this hardware requirement and prevents your app from being installed on devices that don’t include this piece of hardware.
  • However, it’s good practice to mark the touch sensor as preferred but not required, so that Google Play will permit users to download your app even if their device doesn’t have a fingerprint sensor:
    <uses-feature android:name="android.hardware.fingerprint" android:required="false"/>
  • If you opt for this approach, your app needs to check for the presence of a touch sensor at runtime and disable its fingerprint authentication features where appropriate, as in the sketch below.
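A minimal sketch of that runtime check, assuming it lives inside an Activity in a project whose minSdkVersion is below 23 (the helper name checkFingerprintSupport is hypothetical):

// Returns true only when the device can actually offer fingerprint authentication.
private boolean checkFingerprintSupport() {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.M) {
        return false; // Fingerprint APIs are unavailable below Android 6.0.
    }
    FingerprintManager fingerprintManager =
            (FingerprintManager) getSystemService(FINGERPRINT_SERVICE);
    // isHardwareDetected() is false on devices without a touch sensor.
    return fingerprintManager != null && fingerprintManager.isHardwareDetected();
}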

User Interface

activity_main.xml

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:tools="http://schemas.android.com/tools"
   android:id="@+id/activity_fingerprint"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:background="@color/colorPrimary"
   tools:context="com.yudiz.FingerprintActivity">

   <LinearLayout
       android:layout_width="match_parent"
       android:id="@+id/headerLayout"
       android:orientation="vertical"
       android:gravity="center"
       android:layout_marginTop="100dp"
       android:layout_height="wrap_content">

       <ImageView
           android:layout_width="70dp"
           android:layout_height="70dp"
           android:src="@drawable/ic_action_fingerprint"
           android:id="@+id/icon"
           android:paddingTop="2dp"    
           android:layout_marginBottom="30dp"/>

       <TextView
           android:layout_width="match_parent"
           android:layout_height="wrap_content"
           android:textColor="@color/textPrimary"
           android:textSize="16sp"
           android:textAlignment="center"
           android:gravity="center"
           android:id="@+id/desc"
           android:text="please place your finger to verify your identity"
           android:layout_margin="16dp"
           android:paddingEnd="30dp"
           android:paddingStart="30dp"/>

  </LinearLayout>
</RelativeLayout>

Now it’s time to build the fingerprint authentication part.

Part 1: Check whether the device has the hardware, software and settings required to support fingerprint authentication

  • Verify the secure lock screen using KeyguardManager and FingerprintManager:
    KeyguardManager keyguardManager =
            (KeyguardManager) getSystemService(KEYGUARD_SERVICE);

    FingerprintManager fingerprintManager =
            (FingerprintManager) getSystemService(FINGERPRINT_SERVICE);
  • Verify the hardware requirement, runtime permissions and software settings

If all the conditions are met, the app is ready to start the authentication process.

Part 2: Create the key, cipher and CryptoObject that we’ll use to perform the actual authentication.

  • First of all, gain access to the Android keystore, which lets you store cryptographic keys in a container that is difficult to extract from the device.
  • Generate the app’s unique encryption key:
  1. Obtain a reference to the keystore using the standard Android keystore container identifier (“AndroidKeyStore”)
  2. Obtain a KeyGenerator instance for AES keys backed by that keystore
  3. Initialize an empty keystore
  4. Initialize the KeyGenerator with a KeyGenParameterSpec, configuring the key so that the user has to confirm their identity with a fingerprint each time they want to use it
  5. Generate the key

keyStore = KeyStore.getInstance("AndroidKeyStore");
keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");
keyStore.load(null);
keyGenerator.init(new KeyGenParameterSpec.Builder(KEY_NAME,
        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
        .setBlockModes(KeyProperties.BLOCK_MODE_CBC)
        .setUserAuthenticationRequired(true)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7)
        .build());
keyGenerator.generateKey();

Initialise the Cipher that will be used to create the encrypted FingerprintManager.CryptoObject:

Cipher cipher = Cipher.getInstance(
       KeyProperties.KEY_ALGORITHM_AES + "/"
          + KeyProperties.BLOCK_MODE_CBC + "/"
          + KeyProperties.ENCRYPTION_PADDING_PKCS7);
keyStore.load(null);
SecretKey key = (SecretKey) keyStore.getKey(KEY_NAME, null);
cipher.init(Cipher.ENCRYPT_MODE, key);

Create the CryptoObject from the cipher instance and, after the various other checks, hand it over to the helper class to initiate the authentication process:

cryptoObject = new FingerprintManager.CryptoObject(cipher);
Helper helper = new Helper(this);
helper.startAuth(fingerprintManager, cryptoObject);

Part 3: Create Helper Class

  • Create a helper class that extends FingerprintManager.AuthenticationCallback and overrides 4 methods:
    1. onAuthenticationFailed() will be called whenever the fingerprint doesn’t match any fingerprint registered on the device.
    2. onAuthenticationError(int errMsgId, CharSequence errString) will be called when a fatal error has occurred.
    3. onAuthenticationSucceeded(FingerprintManager.AuthenticationResult result) will be called when the fingerprint has been successfully matched.
    4. onAuthenticationHelp(int helpMsgId, CharSequence helpString).
      This is an important method which will be called when a non-fatal error has occurred; it provides additional information about the error.
  • Cancel the CancellationSignal whenever your app can no longer process user input. If you don’t cancel it, other apps will be unable to access the touch sensor, including the lock screen. A cancellation sketch is shown below.
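A minimal sketch of that cancellation, assuming the CancellationSignal passed to authenticate() is kept in a field named cancellationSignal (an assumption; the Helper class below creates it locally):

@Override
protected void onPause() {
    super.onPause();
    // Stop listening for fingerprints so other apps (and the lock screen) can use the sensor.
    if (cancellationSignal != null && !cancellationSignal.isCanceled()) {
        cancellationSignal.cancel();
    }
}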

MainActivity.java

public class MainActivity extends AppCompatActivity {

    private static final String KEY_NAME = "fprint";
    private KeyStore keyStore;
    private Cipher cipher;
    private FingerprintManager.CryptoObject cryptoObject;
    private KeyGenerator keyGenerator;
    private KeyguardManager keyguardManager;
    private FingerprintManager fingerprintManager;
    // Binding class generated by Data Binding for activity_main.xml (assumes a <layout> root tag).
    private ActivityMainBinding mBinding;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mBinding = DataBindingUtil.setContentView(this, R.layout.activity_main);
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
            keyguardManager = (KeyguardManager) getSystemService(KEYGUARD_SERVICE);
            fingerprintManager = (FingerprintManager) getSystemService(FINGERPRINT_SERVICE);

            if (!fingerprintManager.isHardwareDetected()) {
                mBinding.desc.setText("Your device doesn't support fingerprint authentication");
            }

            /* Runtime permission */
            if (ActivityCompat.checkSelfPermission(this, Manifest.permission.USE_FINGERPRINT)
                    != PackageManager.PERMISSION_GRANTED) {
                mBinding.desc.setText("Please enable the fingerprint permission");
            }

            if (!fingerprintManager.hasEnrolledFingerprints()) {
                mBinding.desc.setText("No fingerprint found. Please register at least one fingerprint");
            }

            if (!keyguardManager.isKeyguardSecure()) {
                mBinding.desc.setText("Please enable a lock screen password in your device's Settings");
            } else {
                try {
                    generateEncryptionKey();
                } catch (FingerprintAuthException e) {
                    e.printStackTrace();
                }

                if (initializeCipher()) {
                    cryptoObject = new FingerprintManager.CryptoObject(cipher);
                    Helper helper = new Helper(this);
                    helper.startAuth(fingerprintManager, cryptoObject);
                }
            }
        }
    }
    private void generateEncryptionKey() throws FingerprintAuthException {
        try {
            keyStore = KeyStore.getInstance("AndroidKeyStore");
            keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");
            keyStore.load(null);
            keyGenerator.init(new KeyGenParameterSpec.Builder(KEY_NAME,
                    KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
                    .setBlockModes(KeyProperties.BLOCK_MODE_CBC)
                    .setUserAuthenticationRequired(true)
                    .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7)
                    .build());
            keyGenerator.generateKey();
        } catch (KeyStoreException
                | NoSuchAlgorithmException
                | NoSuchProviderException
                | InvalidAlgorithmParameterException
                | CertificateException
                | IOException exc) {
            exc.printStackTrace();
            throw new FingerprintAuthException(exc);
        }
    }

    public boolean initializeCipher() {
        try {
            cipher = Cipher.getInstance(
                    KeyProperties.KEY_ALGORITHM_AES + "/"
                            + KeyProperties.BLOCK_MODE_CBC + "/"
                            + KeyProperties.ENCRYPTION_PADDING_PKCS7);
        } catch (NoSuchAlgorithmException | NoSuchPaddingException e) {
            throw new RuntimeException("Failed to get Cipher", e);
        }

        try {
            keyStore.load(null);
            SecretKey key = (SecretKey) keyStore.getKey(KEY_NAME, null);
            cipher.init(Cipher.ENCRYPT_MODE, key);
            return true;
        } catch (KeyPermanentlyInvalidatedException e) {
            return false;
        } catch (KeyStoreException | CertificateException
                | UnrecoverableKeyException | IOException
                | NoSuchAlgorithmException | InvalidKeyException e) {
            throw new RuntimeException("Failed to init Cipher", e);
        }
    }

    private static class FingerprintAuthException extends Exception {
        public FingerprintAuthException(Exception e) {
            super(e);
        }
    }
}

Helper.java

public class Helper extends FingerprintManager.AuthenticationCallback {

    private Context context;
   public Helper(Context mContext) {
        context = mContext;
    }

    public void startAuth(FingerprintManager manager, FingerprintManager.CryptoObject cryptoObject) {
        CancellationSignal cancellationSignal = new CancellationSignal();
        if (ActivityCompat.checkSelfPermission(context, Manifest.permission.USE_FINGERPRINT)
                != PackageManager.PERMISSION_GRANTED) {
            return;
        }
        manager.authenticate(cryptoObject, cancellationSignal, 0, this, null);
    }

    @Override
    public void onAuthenticationError(int errMsgId, CharSequence errString) {
        this.update("Fingerprint Authentication error\n" + errString);
    }

    @Override
    public void onAuthenticationHelp(int helpMsgId, CharSequence helpString) {
        this.update("Fingerprint Authentication help\n" + helpString);
    }

    @Override
    public void onAuthenticationFailed() {
        this.update("Fingerprint Authentication failed.");
    }

    @Override
    public void onAuthenticationSucceeded(FingerprintManager.AuthenticationResult result) {
        this.update("Fingerprint authentication succeeded");
        ((Activity) context).finish();
    }

    private void update(String e){
        TextView textView = (TextView) ((Activity)context).findViewById(R.id.desc);
        textView.setText(e);
    }

}

How To Test your App In Android Emulator

  • To test the app you can use a real device that has a touch sensor, but it is also possible to test it in the emulator.
  • To use this app in the Android Emulator, you first have to enrol a fingerprint from the emulator’s Settings > Security menu. When the system asks for a fingerprint, you use an adb command to emulate the finger touch:
  • Open your Mac’s Terminal (or Command Prompt if you’re a Windows user), change directory (cd) to the Android/sdk/platform-tools folder of your Android SDK download, and fire this command:
    adb -e emu finger touch <finger_id>
  • On Windows, you may have to connect to the emulator console with telnet 127.0.0.1 <console_port> and then run finger touch <finger_id>.

The image below shows the app in action:


UIView Animation with Swift 4


Overview

Out of all the blogs that I have written, this is a special one. I have come across many kinds of animation: interactive, non-interactive, transitions, Core Animation, UIView animations, etc. I have googled many third-party libraries and used them too, but in the end the dependencies became obstacles and it is very difficult to modify library code to fit a specific requirement. So I decided to create a few animation methods which can easily be used and modified to fit any requirement.

Animation is a great way to present your application. Animations make an app’s user experience more joyful, but they are often hard to implement; in this tutorial they will be fun and easy to customize and integrate into the app. A good animation shows your skill and potential, and a way to handle a complex user interface with a better user experience.

The following video shows the demo application.

I’m assuming you are familiar with creating a new Xcode project, so I’m skipping that part.

Animation Configuration:

Animations require some configuration, so we are going to create a class for that with default values, which will also be used for the random animations.

offset: the amount of movement in points; it is applied along the chosen direction (right, left, top or bottom).

static var offset: CGFloat = 30.0

duration: Animation duration

static var duration: Double = 0.35

interval: the interval used when multiple views need to be animated one after the other rather than at the same time.

static var interval: Double = 0.075

maxZoomScale: Maximum zoom to be applied in animation.

static var maxZoomScale: Double = 2.0

maxRotationAngle: Maximum rotation to be applied in animation.

static var maxRotationAngle: CGFloat = .pi / 4

AnimationConfiguration class:

//MARK:- AnimationConfiguration
class AnimationConfiguration {

    static var offset: CGFloat = 30.0

    //Duration of the animation.
    static var duration: Double = 0.35

    //Interval for animations handling multiple views that need to be animated one after the other and not at the same time.
    static var interval: Double = 0.075

    static var maxZoomScale: Double = 2.0

    //Maximum rotation (left or right)
    static var maxRotationAngle: CGFloat = .pi / 4

}

Animation Directions:

The animation direction type identifies the flow of the animation, which can be vertical or horizontal, depending on the flow of the design you are building with UITableView or UICollectionView.

The possible directions are top, bottom, right and left. The isVertical property checks whether the animation should run on the X or Y axis, and isPositive determines whether the offset value is positive or negative.

The random() function returns a random animation direction.

//MARK:- DirectionType
enum AnimationDirectionType: Int {

    case top
    case bottom
    case right
    case left

    var isVertical: Bool {
        switch self {
        case .top, .bottom:
            return true
        case .left, .right:
            return false
        }
    }

    var isPositive: CGFloat {
        switch self {
        case .top, .left:
            return -1
        case .right, .bottom:
            return 1
        }
    }

    //Random direction.
    static func random() -> AnimationDirectionType {
        let rawValue = Int(arc4random_uniform(4))
        return AnimationDirectionType(rawValue: rawValue)!
    }
}

Animation Type:

Following animation types are available to perform:
from: Animation with direction and offset point.

case from(direction: AnimationDirectionType, offSet: CGFloat)

zoom: Zoom animation.

case zoom(scale: CGFloat)

rotate: Rotation animation.

case rotate(angle: CGFloat)

To create the corresponding CGAffineTransform for each AnimationType, declare a computed property with return type CGAffineTransform.

Here a switch over self handles the different types of animation:
case .from: takes the direction of the animation (top, bottom, left or right) and the offset value from which the animation should start.

case .from(direction: let direction, offSet: let offSet):

case .zoom: takes a scale value (CGFloat) applied to both the X and Y axes.

case .zoom(scale: let scale):

case .rotate: takes an angle value (CGFloat). The rotation can go left or right, and can be around the center or a specific point.

case .rotate(angle: let angle):

You can create a new case or modify the existing ones as per your requirement. Play around with the values to get the desired animation.

initialTransform:

var initialTransform: CGAffineTransform {
        switch self {
        case .from(direction: let direction, offSet: let offSet):
            let positive = direction.isPositive
            if direction.isVertical {
                return CGAffineTransform(translationX: 0, y: offSet * positive)
            }
            return CGAffineTransform(translationX: offSet * positive, y: 0)
        case .zoom(scale: let scale):
            return CGAffineTransform(scaleX: scale, y: scale)
        case .rotate(angle: let angle):
            return CGAffineTransform(rotationAngle: angle)
        }
    }

One last method provides a newly generated random animation type.

//Generated random animation.
    static func random() -> AnimationType {
        let index = Int(arc4random_uniform(3))
        if index == 1 {
            return AnimationType.from(direction: AnimationDirectionType.random(),
                                      offSet: AnimationConfiguration.offset)
        } else if index == 2 {
            let scale = Double.random(min: 0, max: AnimationConfiguration.maxZoomScale)
            return AnimationType.zoom(scale: CGFloat(scale))
        }
        let angle = CGFloat.random(min: -AnimationConfiguration.maxRotationAngle, max: AnimationConfiguration.maxRotationAngle)
        return AnimationType.rotate(angle: angle)
    }

UIView extension animation related methods:

Global declaration of completion block

//CompletionBlock
typealias CompletionBlock = (() -> ())

Following animation methods include the parameters:

withType: an array of AnimationType values to be performed.
reversed: initial state of the animation; when reversed, the view starts from its original position.
initialAlpha: initial alpha of the view prior to the animation.
finalAlpha: the view’s alpha after the animation.
delay: time delay before the animation.
duration: the TimeInterval the animation takes to complete.
animationInterval: the TimeInterval between each of the subviews’ animations.
backToOriginalForm: whether the view is restored to its identity afterwards.
completion: CompletionBlock called after the animation finishes.

func animate(withType: [AnimationType], reversed: Bool = false, initialAlpha: CGFloat = 0.0, finalAlpha: CGFloat = 1.0, delay: Double = 0.0, duration: TimeInterval = AnimationConfiguration.duration, backToOriginalForm: Bool = false, completion: CompletionBlock? = nil) {

        let transformFrom = transform
        var transformTo = transform

        withType.forEach { (viewTransform) in
            transformTo = transformTo.concatenating(viewTransform.initialTransform)
        }

        if reversed == false {
            transform = transformTo
        }

        alpha = initialAlpha

        DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
            UIView.animate(withDuration: duration, delay: delay, options: [.curveLinear, .curveEaseInOut], animations: { [weak self] in
                self?.transform = reversed == true ? transformTo : transformFrom
                self?.alpha = finalAlpha
            }, completion: { (_) in
                completion?()
                if backToOriginalForm == true {
                    UIView.animate(withDuration: 0.35, delay: 0.0, options: [.curveLinear, .curveEaseInOut], animations: { [weak self] in
                        self?.transform = .identity
                    }, completion: nil)
                }
            })
        }
    }

The above method animates a particular view, subview or contentView, but that’s not the fun part; we also want to animate all views. Animating all subviews of a main view requires a small delay between them, otherwise they would all animate at the same time.

The animateAll method requires the following parameters:
withType: the types of animation to apply.
interval: the time interval between each subview’s animation.

func animateAll(withType: [AnimationType], interval: Double = AnimationConfiguration.interval) {
    for(index, value) in subviews.enumerated() {
       let delay = Double(index) * interval
       value.animate(withType: withType, delay: delay)
    }
}

Now we can animate all views; what about random animations for all views, including subviews or content views?

The animationRandom method requires only one parameter:
interval: the time interval between each subview’s animation.

func animationRandom(interval: Double = AnimationConfiguration.interval) {
    for(index, value) in subviews.enumerated() {
       let delay = Double(index) * interval
       let animationRandom = AnimationType.random()
       value.animate(withType: [animationRandom], delay: delay)
    }
}

After creating all the animation methods, we need a way to restore everything back to its identity, including subviews. The following method restores the identity transform.

//It will restore all subviews to their identity
    func restoreAllViewToIdentity() {
        for(_, value) in subviews.enumerated() {
            value.transform = CGAffineTransform.identity
        }
    }

Here are some examples of how to use the code:

Animate all:

let bottomAnimation = AnimationType.from(direction: .bottom, offSet: 30.0)
self.collectionView.animateAll(withType: [bottomAnimation])

Combine animation with completion block:

let zoomOutAnimation = AnimationType.zoom(scale: 0.3)
let angle = AnimationType.from(direction: .bottom, offSet: 30.0)
self.collectionView.animate(withType: [angle, zoomOutAnimation], reversed: true, initialAlpha: 0.0, finalAlpha: 1.0, delay: 0.1, duration: 0.5, backToOriginalForm: true, completion: {
    //
})

Animate main view including the subview with random animation:

view.animationRandom()

Animating cell including content of it:

for subViews in self.tableView.visibleCells {
    let bottomAnimation = AnimationType.from(direction: .bottom, offSet: 30.0)
    subViews.contentView.animateAll(withType: [bottomAnimation])
}

Here is the full source code link. Take a look, dig in, and feel free to contribute; any changes or suggestions are welcome. Please create a pull request to contribute.

If you like this type of blog, more Core Animation posts are on the way, covering pulsating effects, the Facebook feed loading animation, the like-button animation from Facebook live streaming and more. Let us know your feedback.

Now you are ready to create your own cool animations.

Bitcoin: Investment System or Technology (Innovation)?


Overview

This blog aims to clear up the misconceptions about bitcoin. In it, we will see what bitcoin actually is and which technology makes bitcoin popular.

Assumption about Bitcoin

Many people wrongly assume that bitcoin is just an investment system where they invest money and get good returns. But that’s not true.

Bitcoin is a new innovation in technology, and this innovation is no less significant than the Wright brothers’ invention of the airplane.

bitcoin-assumption

Right now you may be laughing and thinking “bitcoin and innovation!?”, but after reading this full blog you’ll agree with me.

Problems of Traditional Currency

“As many as 576,000 Iraqi children may have died since the end of the Persian Gulf war because of economic sanctions imposed by the Security Council, according to two scientists who surveyed the country for the Food and Agriculture Organization.”

bitcoin-problems

My intention is not to blame anyone; I am just trying to explain that this type of traditional currency system, controlled by a central authority, can be forced into shocking and ruthless decisions, and we the people have to bear the impact of those decisions.

Current Banking Issues :

1. Banks have become synonymous with crises and crashes due to depressions and fractional reserve banking:

A recent real example of a bank fraud/scam is the PNB case discussed later in this blog.

Another real example of bank fraud: Kingfisher Airlines owner Vijay Mallya’s fraud involving SBI bank.

scam-statistics

Some of these frauds are caused by employees of the banks themselves. As per a survey, 450 employees were involved in such frauds across different sector banks during April-December 2016, with a total fraud value of Rs. 17,750.25 crore.

So we lose our valuable money just because of these bank failures. There are so many real examples of bank frauds, yet hardly anyone cares about this problem or its solution. With new technology adoption we can solve these problems, and we will see how bitcoin does so later in this blog.

2. Double Spending :

Double-spending is a potential flaw in a digital cash scheme in which the same single digital token can be spent more than once. This is possible because a digital token consists of digital code that can be easily copied.

As with counterfeit money, such double-spending leads to inflation by creating a new amount of fraudulent currency that did not previously exist.

Let’s try to understand the double spending problem with a simple example:

Take the e-mail system as an example. Assume my e-mail id is niravmodi@example.com and I send a photo to pnb@example.com, and then send the same photo to bob@example.com. Since I had the original image, I could use the same image to send it again.

Now assume I use the same technique in a digital cash system: I convert my national currency to digital cash and, as a software programmer, I just need to crack that digital cash code. Then I can use the same code to generate new copies of digital cash.

And finally, if you want to prevent this problem in digital cash, you have to depend on a third party who authenticates transactions. But this is not a proper solution either; here we also encounter problems like trust in the third party, failure of the third party, and so on.

Satoshi Nakamoto & Bitcoin

“SATOSHI NAKAMOTO” is the person, or group of persons, who observed these major problems of traditional currency.

Before 2008, people only dreamt about digital currency because they believed that the concept was not possible in real life.

But Satoshi Nakamoto came up with a solution to the traditional currency problems in October 2008 and published the BITCOIN white paper.

People and governments were shocked after reading this white paper.

Satoshi Nakamoto was the first to solve the double spending problem for a digital currency.

They solved this problem with the support of BLOCKCHAIN technology.

We will go deeper into blockchain technology in my upcoming blogs, because our main motive is to make people aware of blockchain technology and its power.

Bitcoin

Bitcoin is a cryptocurrency and a digital payment system.

Bitcoin is the first distributed digital crypto-currency.

The Bitcoin system works without a single administrator or central bank.

The network is peer-to-peer and transactions take place between users directly, without an intermediary.

Bitcoin was invented by Satoshi Nakamoto (“an unknown person or group of people”) and released as open-source software in 2009.

How does Bitcoin solve the above problems?

  1. How can bitcoin solve the problem of shocking and ruthless decisions taken by the central authority of a traditional currency system?

    You may already guess part of the answer from the introduction of bitcoin, but let me explain. First of all, bitcoin is not owned by a central bank or a single administrator.

    So no decision is taken by a single authority.

    It is owned by the people who use it. Everyone is their own bank.

    So people can make decisions on their own; no one can control their money and no one has the right to track their money.

  2. How does bitcoin solve bank frauds using blockchain technology?

    The bitcoin blockchain has a distributed, transparent ledger. The ledger is public for all to access, but no one can modify the data in this ledger because of its immutable nature.

    Let’s try to understand both of the above points:

    First, a distributed transparent ledger means the transaction data is not stored on a single system (or, we can say, a single server). It is stored and distributed across the connected nodes (PCs) in the bitcoin blockchain P2P network. Everyone who is connected to this network has a copy of the transaction ledger, so anyone who tries to modify the data can be detected by the others who hold the same copy of this ledger.

    Because of the blockchain’s immutable nature, the data on the bitcoin blockchain cannot be tampered with.

    By implementing blockchain technology in their transaction systems, banks could stop frauds like the one at PNB bank.

    I am saying this because blockchain removes the middleman and brings trust into the system.

    All transactions are stored and validated by computer code and secured by cryptographic algorithms in a blockchain.

  3. How does bitcoin solve the double spending problem?

    As we saw above, bitcoin is backed by the blockchain, which prevents the double spending problem. So you might ask how the blockchain solves double spending, right? For that we have to go deep into blockchain technology, which I will discuss in the next blogs; for now I will simply say that the blockchain prevents double spending through its block structure mechanism.

Misconception about Bitcoin and Blockchain

bitcoin-blockchain

A major misconception or confusion is that bitcoin and blockchain are the same thing, and that if bitcoin doesn’t succeed, blockchain can’t succeed either.

But that’s not true.

Bitcoin and other cryptocurrencies are just well-known applications that run on a blockchain.

Blockchain is a vast technology and one of the most promising technologies of the future.

I hope this knowledge about bitcoin and blockchain is helpful to you; stay tuned for my upcoming blogs.

Android P Features & API


Overview

Google has released the latest version of Android: Android P. This is not a full release; it’s just a preview version for developers. Google has not declared what the P stands for, so in this article I will refer to this release as Android P.

Introduction

Android P introduces great new features and capabilities for users and developers. This article highlights the new features for developers.

Let’s discuss what’s new in Android P. Here are some notable changes for developers:

  • Indoor Positioning with Wi-Fi RTT
  • Display cutout support
  • Notifications
  • Multi-camera support and camera updates
  • ImageDecoder for bitmaps and drawables
  • Animation
  • HDR VP9 Video, HEIF image compression, and Media APIs
  • Data cost sensitivity in JobScheduler
  • Neural Networks API 1.1
  • Autofill framework
  • Security enhancements
  • Client-side encryption of Android backups

Indoor Positioning with Wi-Fi RTT

To let your apps take advantage of indoor positioning, Android P introduces platform support for the IEEE 802.11mc Wi-Fi protocol, also known as Wi-Fi Round-Trip-Time (RTT).

Using the new RTT APIs, your app can measure the distance to nearby RTT-capable Wi-Fi access points.

To enable this feature in your app, the device must have location enabled and Wi-Fi scanning turned on (Settings -> Location), and the app must hold the following permission:

ACCESS_FINE_LOCATION

With this, our app can obtain indoor positioning data accurate down to one or two meters by measuring the distance to various Wi-Fi access points. Using this feature, you can develop new ideas like in-building navigation and fine-grained location-based services. A minimal ranging sketch is shown below.
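A minimal sketch of a one-off ranging request against these RTT APIs; the accessPoints list (RTT-capable ScanResults from a Wi-Fi scan) is assumed to exist, and error handling is omitted:

// Sketch: measure the distance to RTT-capable access points found in a Wi-Fi scan.
WifiRttManager rttManager = (WifiRttManager) getSystemService(Context.WIFI_RTT_RANGING_SERVICE);

RangingRequest request = new RangingRequest.Builder()
        .addAccessPoints(accessPoints) // assumed List<ScanResult> of 802.11mc-capable APs
        .build();

rttManager.startRanging(request, getMainExecutor(), new RangingResultCallback() {
    @Override
    public void onRangingResults(List<RangingResult> results) {
        for (RangingResult result : results) {
            if (result.getStatus() == RangingResult.STATUS_SUCCESS) {
                int distanceMm = result.getDistanceMm(); // distance to the access point in millimetres
            }
        }
    }

    @Override
    public void onRangingFailure(int code) {
        // Ranging could not be performed.
    }
});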

Display cutout support

Android P offers support for display cutouts, which enable edge-to-edge screens with a notch for the camera and speaker. Android P introduces a new DisplayCutout class, which lets you find out the location and shape of the areas where app content shouldn’t be displayed.

To check whether such an area exists, we can use the following method on the window insets:

getDisplayCutout()

cutout

There is now a window attribute, layoutInDisplayCutoutMode, which allows our application to control how content is laid out around the device’s cutout.
We can set the following values for this attribute (a sketch follows the list):

  • LAYOUT_IN_DISPLAY_CUTOUT_MODE_DEFAULT
  • LAYOUT_IN_DISPLAY_CUTOUT_MODE_ALWAYS
  • LAYOUT_IN_DISPLAY_CUTOUT_MODE_NEVER
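A minimal sketch of reading the cutout and keeping window content out of the cutout area, assuming it runs in an Activity after the content view is set (the NEVER mode is used purely as an illustration):

// Sketch: inspect the cutout and keep this window's content out of the cutout area.
WindowManager.LayoutParams lp = getWindow().getAttributes();
lp.layoutInDisplayCutoutMode =
        WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_NEVER;
getWindow().setAttributes(lp);

View decorView = getWindow().getDecorView();
decorView.setOnApplyWindowInsetsListener((view, insets) -> {
    DisplayCutout cutout = insets.getDisplayCutout(); // null when the screen has no cutout
    if (cutout != null) {
        int safeTop = cutout.getSafeInsetTop(); // height of the area content should avoid
    }
    return view.onApplyWindowInsets(insets);
});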

Notifications

Android P introduces many improvements to notifications, as follows:

notifcation

Enhanced messaging experience

Support for displaying images

setData()

Identify people involved in conversation

Notification.Person

Save replies as a draft:

EXTRA_REMOTE_INPUT_DRAFT

Identify the type of conversation (group or one-to-one)

setGroupConversation()

notifcation2

Set the semantic action for an intent, e.g. mark as read, delete, reply.

setSemanticAction()

Smart reply

RemoteInput.setChoices()
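A minimal sketch pulling several of these messaging APIs together; the channel id, icon resource, PendingIntent, image Uri and contact name are hypothetical, and in the final API 28 release the Person class lives in android.app rather than inside Notification:

// Sketch: a messaging notification using Person, setData, setGroupConversation and a semantic action.
Person sender = new Person.Builder().setName("Alex").build(); // hypothetical contact

Notification.MessagingStyle style = new Notification.MessagingStyle(sender)
        .setGroupConversation(false) // one-to-one conversation
        .addMessage(new Notification.MessagingStyle.Message("Check out this photo!",
                System.currentTimeMillis(), sender)
                .setData("image/jpeg", imageUri)); // hypothetical content Uri of the image

Notification.Action markAsRead = new Notification.Action.Builder(
        Icon.createWithResource(this, R.drawable.ic_mark_read), // hypothetical icon
        "Mark as read", markAsReadPendingIntent)                // hypothetical PendingIntent
        .setSemanticAction(Notification.Action.SEMANTIC_ACTION_MARK_AS_READ)
        .build();

Notification notification = new Notification.Builder(this, CHANNEL_ID) // hypothetical channel id
        .setSmallIcon(R.drawable.ic_message) // hypothetical icon
        .setStyle(style)
        .addAction(markAsRead)
        .build();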

Channel settings, broadcasts, and Do Not Disturb

Blocking channel groups

isBlocked()

New broadcast intent types

– Using NotificationManager reference.

New Do Not Disturb priority categories

PRIORITY_CATEGORY_ALARMS,
PRIORITY_CATEGORY_MEDIA_SYSTEM_OTHER

Multi-camera support and camera updates

Android P supports access to streams from two or more physical cameras simultaneously. Using this new feature, we can build things that aren’t possible with a single camera, such as seamless zoom, bokeh and stereo vision, on devices with dual front or dual back cameras. A logical or fused camera can automatically switch between two or more physical cameras. A capability-check sketch follows the feature list below.

muticamera

Android P also introduces many features related to camera as follows:

  • Flash support
  • Image stabilisation
  • Special effects
  • External USB/UVC cameras
  • Reduce delay during initial capture
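A minimal sketch, using the Camera2 APIs added in P, of detecting a logical multi-camera and listing the physical cameras behind it (purely illustrative; no capture session is opened):

// Sketch: find logical multi-camera devices and the physical cameras fused behind them.
CameraManager cameraManager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
try {
    for (String cameraId : cameraManager.getCameraIdList()) {
        CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(cameraId);
        int[] capabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
        if (capabilities == null) continue;
        for (int capability : capabilities) {
            if (capability == CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA) {
                // Ids of the physical cameras behind this logical camera.
                Set<String> physicalIds = characteristics.getPhysicalCameraIds();
            }
        }
    }
} catch (CameraAccessException e) {
    e.printStackTrace();
}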

ImageDecoder for bitmaps and drawables

Android P introduces a modern approach to image decoding. We can use the ImageDecoder class to decode images (a combined sketch appears at the end of this section).

Create a Drawable or a Bitmap from a byte buffer, a file or a URI:

createSource()
decodeBitmap()
decodeDrawable()
onHeaderDecoded()

Scale the decoded image to an exact size:

setResize()

Crop the decoded image within the scaled bounds:

setCrop()

Create a mutable Bitmap

setMutable(true)
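A minimal sketch tying these calls together to decode a drawable at half size. The resource id and ImageView are hypothetical, and note that in the final API 28 release setResize() became setTargetSize() and setMutable() became setMutableRequired():

// Sketch: decode a resource into a Drawable at half its original size.
ImageDecoder.Source source = ImageDecoder.createSource(getResources(), R.drawable.sample); // hypothetical resource
try {
    Drawable drawable = ImageDecoder.decodeDrawable(source, (decoder, info, src) -> {
        // onHeaderDecoded(): the image header has been read, so its size is known here.
        decoder.setTargetSize(info.getSize().getWidth() / 2, info.getSize().getHeight() / 2);
    });
    imageView.setImageDrawable(drawable); // hypothetical ImageView
} catch (IOException e) {
    e.printStackTrace();
}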

Animation

Android P introduces AnimatedImageDrawable, a class for drawing and displaying animated GIF and WebP images. It works similarly to AnimatedVectorDrawable. Using AnimatedImageDrawable, the app can animate an image without interfering with your app’s UI thread. A short usage sketch follows.
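A minimal sketch of decoding and starting an animated image, again with a hypothetical resource id and ImageView:

// Sketch: decode an animated GIF/WebP resource and start it.
ImageDecoder.Source source = ImageDecoder.createSource(getResources(), R.drawable.animated_sample); // hypothetical
try {
    Drawable drawable = ImageDecoder.decodeDrawable(source);
    imageView.setImageDrawable(drawable); // hypothetical ImageView
    if (drawable instanceof AnimatedImageDrawable) {
        ((AnimatedImageDrawable) drawable).start(); // decoding and animation stay off the UI thread
    }
} catch (IOException e) {
    e.printStackTrace();
}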

HDR VP9 Video, HEIF image compression, and Media APIs

Android P supports High Dynamic Range (HDR) VP9 Profile 2, so we can now deliver HDR-enabled content to users from YouTube, Play Movies, and other sources on HDR-capable devices.

HEIF image encoding is supported by Android P

  • Supports Image compression to save on storage and Network data.
  • Easy to utilize images from the server

MediaPlayer2 is also new in Android P. This player supports building playlists using DataSourceDesc.

MediaPlayer2.create()

Data cost sensitivity in JobScheduler

Android P adds features that help JobScheduler handle network-related jobs for the user.

When the methods below are used appropriately, JobScheduler can schedule the work at the right time and on the right network.

setEstimatedNetworkBytes()
setIsPrefetch()
setRequiredNetwork()

When the job runs, be sure to use the Network object returned by JobParameters.getNetwork(). A scheduling sketch is shown below.
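A minimal sketch of scheduling a prefetch job with an estimated payload size; the job id, MyJobService and the byte counts are hypothetical:

// Sketch: tell JobScheduler how much data the job expects to transfer and that it is a prefetch.
JobScheduler jobScheduler = (JobScheduler) getSystemService(Context.JOB_SCHEDULER_SERVICE);

JobInfo job = new JobInfo.Builder(42, new ComponentName(this, MyJobService.class)) // hypothetical id and service
        .setRequiredNetwork(new NetworkRequest.Builder()
                .addCapability(NetworkCapabilities.NET_CAPABILITY_NOT_METERED) // prefer unmetered networks
                .build())
        .setEstimatedNetworkBytes(5 * 1024 * 1024, 1024) // roughly 5 MB down, 1 KB up (hypothetical)
        .setIsPrefetch(true) // content the user is likely to want soon
        .build();

jobScheduler.schedule(job);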

Neural Networks API 1.1

The Neural Networks API was introduced in Android 8.1 for on-device machine learning. Android P expands it with support for nine new ops:

  • Pad
  • BatchToSpaceND
  • SpaceToBatchND
  • Transpose
  • Strided Slice
  • Mean
  • Div
  • Sub
  • Squeeze

Security enhancements

Android P introduces many new security features:

  • Unified fingerprint authentication dialog
  • High-assurance, user confirmation of sensitive transactions

Client-side encryption of Android backups

Android P supports client-side encryption of Android backups using a secret held on the device. With this feature, restoring backed-up data requires the PIN, pattern or password of the device that made the backup.

Develop your own Swift Framework


Overview

Developing your own Swift framework has the benefits below:

  1. Reusable code
  2. Secure or hide your code
  3. Reduce recompilation
  4. Save time

Tool Used: Xcode 8.3.2, Swift 3+

To create your own framework in Swift, you just have to follow the 5 steps below.

Step 1: Setup Framework Project

Create new Xcode project.

Click on ‘Cocoa Touch Framework’.

Write a name for your project; we have used ‘YudizFramework’. (Make sure to choose the Swift language.) Then click the Next button.

Add new file inside ‘YudizFramework’ folder. (Press ⌘ + N or Click on File -> New -> File…)

Click on ‘Cocoa Touch Class’.

Set the file name to ‘YudizFramework’ and make it a subclass of ‘NSObject’.

Step 2: Write some code

Write the following code inside ‘YudizFramework.swift’:

open class YudizFramework: NSObject {

    open class func logToConsole(_ msg: String) {
        print(msg);
    }
}

Note:- Make sure your class and its methods are declared open.

Step 3: Create a Universal Framework

Add a new ‘Aggregate’ target to your project.

Then add the script below to it as a ‘Run Script’ phase.

#!/bin/sh

UNIVERSAL_OUTPUTFOLDER=${BUILD_DIR}/${CONFIGURATION}-universal

# make sure the output directory exists
mkdir -p "${UNIVERSAL_OUTPUTFOLDER}"

# Step 1. Build Device and Simulator versions
xcodebuild -target "${PROJECT_NAME}" ONLY_ACTIVE_ARCH=NO -configuration ${CONFIGURATION} -sdk iphoneos  BUILD_DIR="${BUILD_DIR}" BUILD_ROOT="${BUILD_ROOT}" clean build
xcodebuild -target "${PROJECT_NAME}" -configuration ${CONFIGURATION} -sdk iphonesimulator ONLY_ACTIVE_ARCH=NO BUILD_DIR="${BUILD_DIR}" BUILD_ROOT="${BUILD_ROOT}" clean build

# Step 2. Copy the framework structure (from iphoneos build) to the universal folder
cp -R "${BUILD_DIR}/${CONFIGURATION}-iphoneos/${PROJECT_NAME}.framework" "${UNIVERSAL_OUTPUTFOLDER}/"

# Step 3. Copy Swift modules from iphonesimulator build (if it exists) to the copied framework directory
SIMULATOR_SWIFT_MODULES_DIR="${BUILD_DIR}/${CONFIGURATION}-iphonesimulator/${PROJECT_NAME}.framework/Modules/${PROJECT_NAME}.swiftmodule/."
if [ -d "${SIMULATOR_SWIFT_MODULES_DIR}" ]; then
cp -R "${SIMULATOR_SWIFT_MODULES_DIR}" "${UNIVERSAL_OUTPUTFOLDER}/${PROJECT_NAME}.framework/Modules/${PROJECT_NAME}.swiftmodule"
fi

# Step 4. Create universal binary file using lipo and place the combined executable in the copied framework directory
lipo -create -output "${UNIVERSAL_OUTPUTFOLDER}/${PROJECT_NAME}.framework/${PROJECT_NAME}" "${BUILD_DIR}/${CONFIGURATION}-iphonesimulator/${PROJECT_NAME}.framework/${PROJECT_NAME}" "${BUILD_DIR}/${CONFIGURATION}-iphoneos/${PROJECT_NAME}.framework/${PROJECT_NAME}"

# Step 5. Convenience step to copy the framework to the project's directory
cp -R "${UNIVERSAL_OUTPUTFOLDER}/${PROJECT_NAME}.framework" "${PROJECT_DIR}"

# Step 6. Convenience step to open the project's directory in Finder
open "${PROJECT_DIR}"

Step 4: Build your Framework

Build your framework by choosing the ‘UniversalYudizFramework’ aggregate target and pressing ⌘ + B.

After the build completes, Finder opens the folder containing your framework.

Step 5: Add this framework to any of your projects

import UIKit
import YudizFramework

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

      YudizFramework.logToConsole("Share it if you liked it. 📢")

    }

}

Note:- If you are getting the following error,

… Reason: image not found

make sure your framework is added under both ‘Embedded Binaries’ and ‘Linked Frameworks and Libraries’.

AR Emoji vs. Animoji


Overview

Only three months ago Apple introduced Animoji with the new iPhone X. An Animoji is an animated emoji represented by various animals like a monkey, cat, dog or fox. Apple’s Animoji can be considered one of the first real-life applications of AR (augmented reality) that makes our conversations more interesting and exciting.
AR Emoji is a cool new feature that Samsung introduced with the new Samsung Galaxy S9 & S9+.

Animoji:

A talking cat, a singing dog and a smiling fox: they are all called Animoji. When you speak, change your eye expression or shake your head, the Animoji does exactly the same.

animoji-1

Apple uses the front camera on the iPhone X to capture your expression and create a 3D emoji of your facial expressions. You can only record a new Animoji within the iMessage app in iOS 11 on iPhone X, though you can also share them with other apps. Right now 12 emoji characters are available for creating Animoji.

AR Emoji:

You can create your own AR Emoji with the new Galaxy S9 & S9+. AR Emoji captures your face with the camera and creates an emoji that actually looks like you.

animoji-2

Before you can start using AR Emoji, you have to set it up. This is a simple process. First open the camera app and select the AR Emoji option, then capture your smiling face. With the help of machine learning and the captured data, the Galaxy S9 creates an animated cartoon character that closely resembles you.

1. Animoji is complex yet accurate

Apple uses the highly complex TrueDepth camera system to map the user’s facial expressions. The TrueDepth camera analyzes more than 50 muscle movements to mirror your expression, so Animoji look more expressive and natural. Some might believe that Apple’s implementation is complicated, yet it is more accurate.

animoji-1

In comparison, Samsung uses the front camera to create a 2D map of the user’s face. As a result, AR Emoji is not as reliable as Animoji at capturing the user’s facial expressions, and in many scenarios it fails to create an AR Emoji that looks exactly like one’s face. That is the downfall of AR Emoji.

2. AR emoji is more Customizable

You can only use the emoji characters that Apple includes to imitate your facial expression. As of now, there are 12 Animoji characters on iPhone X: rabbit, monkey, cat, dog, fox, pig, panda, chicken, unicorn, poo, alien and robot. Apple will expand this list with a bear, dragon, lion and skull in the upcoming iOS 11.3 update.

animoji-2

AR Emoji is more customizable than Animoji. As shown in the picture, you can customize its skin color, hairstyle, hair color, sunglasses and clothes. You can also use Mickey and Minnie, the popular Disney characters, to copy your facial expressions.

3. Animoji is super exclusive

As we discussed earlier, it is hardware that makes the accurate expressions of Animoji possible on iPhone X. Since that hardware is not present on any other phone to date, the only way to create Animoji is with the help of an iPhone X.

Although AR Emoji has been highlighted as a feature exclusive to the Galaxy S9 & S9+, Samsung can easily bring it to more devices, and we may see AR Emoji on other Galaxy-series devices soon.

4. AR Emoji are easy to share

The only way to record and create Animoji is through the iMessage app on iPhone X. You can record a 10-second clip and share it with your friends who are on the same platform, or share an Animoji as a GIF on a social media network.

animoji-3

In comparison, it is much easier to share an AR Emoji. Once you’ve created your AR Emoji, it is saved as a collection of 18 GIFs that are directly accessible from the keyboard inside any app for easier sharing. You can also share an AR Emoji as a GIF or PNG.
In conclusion, we can say that the technology behind Animoji is far ahead of Samsung’s AR Emoji: it is more accurate, fluid, expressive and much better animated.

I hope my blog will help you to understand the difference between Animoji & AR Emoji.

Keep reading and keep learning 🙂

A complete guide to App State Restoration in iOS


Overview

App state restoration lets the user come back to the app exactly as they left it the last time, before the app was suspended. Returning an app to its previous state offers a better user experience and saves time for the user.

Tool Used: Xcode 9.2, Swift 4+

To implement app state restoration you just have to follow the steps below.

Step 1: Enable app state restoration

Add these functions in the AppDelegate.swift file:

func application(_ application: UIApplication, shouldSaveApplicationState coder: NSCoder) -> Bool {
        return true
    }

    func application(_ application: UIApplication, shouldRestoreApplicationState coder: NSCoder) -> Bool {
        return true
    }

Returning true from application(_:shouldSaveApplicationState:) tells the system to store the current state of your views and view controllers when the app goes to the background, and returning true from application(_:shouldRestoreApplicationState:) tells the system to restore that state when the app relaunches.

Step 2: Set restoration Identifiers

Restoration identifiers are unique string names for view controllers or views that state restoration uses to identify and restore them. The restorationIdentifier property can be set in a Storyboard file, a Nib file or code.

Using storyboard:
– Use the same name as the storyboard ID.

Using code:
– Add this code in the viewDidLoad function:

self.restorationIdentifier = "HomeVC"

Step 3: UIStateRestoring Protocol

The encodeRestorableState(_:) function is called when the app goes to the background so that its state can be saved, and decodeRestorableState(_:) is called when the app is restored, for any view controller that has a restorationIdentifier.

override func encodeRestorableState(with coder: NSCoder) {
        // Encode the values you need to restore here, e.g. coder.encode(objUser, forKey: "objUser")
        super.encodeRestorableState(with: coder)
    }

    override func decodeRestorableState(with coder: NSCoder) {
        // Decode them when the app is restored, e.g. objUser = coder.decodeObject(forKey: "objUser") as? User
        super.decodeRestorableState(with: coder)
    }

Please see this code in our demo for more clarity.

Step 4: UIViewControllerRestoration Protocol

The restored view controller’s class must conform to the UIViewControllerRestoration protocol, and the protocol’s method should return the matching view controller instance if it can be recreated, else return nil.

We have implemented it like this in our demo:

//  MARK:- UIViewControllerRestoration
extension HomeDetailVC: UIViewControllerRestoration{
    static func viewController(withRestorationIdentifierPath identifierComponents: [Any], coder: NSCoder) -> UIViewController? {
        guard let restoredUser = coder.decodeObject(forKey: "objUser") as? User else {
            print("decoding User Detail")
            return nil
        }

        if let storyboard = coder.decodeObject(forKey: UIStateRestorationViewControllerStoryboardKey) as? UIStoryboard{
            if let vc = storyboard.instantiateViewController(withIdentifier: "HomeDetailVC") as? HomeDetailVC{
                vc.objUser = restoredUser
                return vc;
            }
        }
        return nil;
    }
}

NOTE:

Source: This part of Apple’s docs

The system automatically deletes an app’s preserved state when the user force quits the app. Deleting the preserved state information when the app is killed is a safety precaution. (As a safety precaution, the system also deletes preserved state if the app crashes twice during launch.) If you want to test your app’s ability to restore its state, you should not use the multitasking bar to kill the app during debugging. Instead, use Xcode to kill the app or kill the app programmatically by installing a temporary command or gesture to call exit on demand.

More Reference:

Preserving and Restoring State in Apple’s docs

Welcome DevOps, Prevent defects!


Overview

What is DevOps & Why DevOps:

It’s a buzzword nowadays, right? We hear it everywhere, whether in the technical industry or outside it. If you are wondering about it, then let me explain! It is mainly a compound of two words: Development and Operations. In simple terms, we can say that Development and Operations work better together. The purpose of DevOps is to improve the relationship between these two important business units by encouraging better communication and collaboration, unifying the development of software with software operations.

DevOps is killing the traditional structure! What is the reason behind that?

The traditional structure involves a large number of handoffs in the process, which makes the overall system disorganized.

devops-image1

DevOps uses continuous integration (CI) and continuous delivery (CD).
In CI, developers use continuous integration tools that integrate the latest code into a shared repository multiple times each day, and the DevOps engineer relies on automation to ensure the quality of each version, as shown in the figure below.

devops-image2

And in CD, they deliver continuously for feedback and reviews, as displayed in the figure below; so we can say that it is the mechanism for the continuous evolution of software development operations.

devops-image3

Specifically, organizations that are ready to create a DevOps culture should use JIRA and Zendesk as bug tracking tools, and test automation tools like Selenium, Cucumber, JUnit, TestNG and JMeter to execute the different test scenarios and measure functional efficiency.

The biggest objectives at risk in the older structure are time, quality and effectiveness, due to the handoffs.
Here are the 5 most important roles and skills that should be added in any organization:

1. DevOps Missionary by the Organization management

The leader of the DevOps team should be brave enough to face any failure and should keep a risk management plan in mind. You have to build a learning culture in which, if you fail once, you learn enough from that failure to prevent the next hundred. Organization management must promote the benefits of DevOps by identifying and quantifying the business advantages that come from the greater agility DevOps delivers.

2. Release of the product

The release manager informs management about the progress of the product and provides proper statistics about the time to the next build. Release managers not only keep an eye on the final release but also oversee coordination, integration, the flow of development and testing, and support continuous delivery. They also make sure there is continuous feedback from the client and that changes are made accordingly.

3. Automation Everywhere

When adopting the Agile methodology, it is necessary to handle continuous changes and evaluate them. Once you have fully automated systems, you also get a smooth environment for agile development and continuous evaluation.

By automating deployment, any organization can fulfil five core essential objectives:

  1. Faster to deploy
  2. Extraordinary quality
  3. Higher overall test coverage
  4. Earlier detection of defects
  5. Reduced business expenses

These business essentials give you better results and better product quality within your predefined time, so how can anyone refuse to use it? The one thing you have to do is gain expertise in automation methodology.

4. Software developer/tester

Whether a product succeeds depends largely on the developer who writes well-structured code for it. Software developers are everything in an IT firm. Developers show their skills while handling changes in production; if the code is well structured, changes take less time and do not create dependency issues. So the first requirement for handling continuous change is that your code should be easy to modify. In DevOps, developers are responsible not only for development but also for unit testing the code they create. On the other side, testers sometimes think they can perform manual testing and still be agile, but quality may suffer that way: how is that possible if you have to test a new build of the same product every day? Product quality can regress with that process, so it is essential to move to automated quality assurance.

5. Project team coordinator

A project coordinator often thinks that their role is pushing the team to meet the deadline and reviewing the latest product build at deployment. In DevOps, the project coordinator works side by side with developers and testers, inserting their suggestions much earlier in the process. They review and guide toward the proper solution and proposal while the product is being built, not at the end of the build.

devops-image4

By taking on these new responsibilities, an organization can easily understand the true worth of DevOps.

Happy Testing..!!!


Dynamic Type Fonts


Overview

With Dynamic Type, the user can set a preferred text size in the device settings and see its effect on the app’s fonts. It’s a good feature for making apps accessible to users. Rather than setting a font size in code, you set the preferred font for a given text style.

There are two ways of adopting Dynamic Type: from the storyboard, or in code when setting up a view.

Using StoryBoard:

You can adopt Dynamic Type by checking ‘Automatically Adjusts Font’; when it is checked, the font size changes as soon as the user changes the preferred text size.
The ‘Automatically Adjusts Font’ option in the Attributes inspector applies only to text styles. It has no effect on custom fonts.

storyboard

Using Code:

The UIFont API provides the preferredFont(forTextStyle:) method, which returns the preferred font for a given text style. There are 10 default built-in text styles. Here is the list of all the text styles along with their font size at the default text size.

List of Text Style with default sizes:

  1. title1: System 28pt
  2. title2: System 22pt
  3. title3: System 20pt
  4. headline: System 17pt
  5. body: System 17pt
  6. callout: System 16pt
  7. subhead: System 15pt
  8. footnote: System 13pt
  9. caption1: System 12pt
  10. caption2: System 11pt

I used some labels in my tutorial, and below are a few lines of code for making them use Dynamic Type.

lblTitle3.font = UIFont.preferredFont(forTextStyle: UIFontTextStyle.headline)
lblBody.font = UIFont.preferredFont(forTextStyle: UIFontTextStyle.body)
lblCaption1.font = UIFont.preferredFont(forTextStyle: UIFontTextStyle.caption1)
lblCaption2.font = UIFont.preferredFont(forTextStyle: UIFontTextStyle.caption2)
lblSubhead.font = UIFont.preferredFont(forTextStyle: UIFontTextStyle.subheadline)
lblFootnote.font = UIFont.preferredFont(forTextStyle: UIFontTextStyle.footnote)

Apple added adjustsFontForContentSizeCategory in iOS 10. When it is true, the font updates automatically as the user changes the preferred text size.

Custom font with Dynamic Type

I mentioned above that 'Automatically Adjusts Font' does not apply to custom fonts, so for custom fonts Apple introduced the UIFontMetrics class in iOS 11, which makes this easy. First you get the font metrics for a particular text style, and then you scale your custom font using those metrics.

lblCustomFont1.font = UIFontMetrics(forTextStyle: .title1).scaledFont(for: UIFont(name: "Helvetica", size: 10)!)

First you get the font metrics for the .title1 text style, and then using the scaledFont(for:) method you get the font scaled to the preferred text size.

lblCustomFont2.font = UIFontMetrics.default.scaledFont(for: UIFont(name: "Helvetica", size: 15)!)

Another option is UIFontMetrics.default.scaledFont(for:), which also returns a font scaled to the preferred text size but uses the .body text style by default.

Run the project and you can see that the labels are styled according to the preferred text size. Here are two screenshots for different preferred text sizes.

font-screenshot

font-screenshot

What’s New In Angular 5 and Angular 6


Overview

Angular is an all-in-one JavaScript framework based on TypeScript that is widely used by developers around the world for building web, mobile, and desktop applications.

Versions of Angular :-

Version 1.0.0 :- released October 2010,
Version 2.0.0 :- released September 2016,
Version 4.0.0 :- released March 2017,
Version 5.0.0 :- released November 2017

The upcoming version is 6.0.0, expected in March or April 2018.
Angular 7.0.0 is expected around September 2018.

Speciality of Angular 5 :-

Angular 5 brings a build optimizer, simpler Progressive Web Applications, the Angular Universal API and DOM support, and improvements related to Material Design.

Key attributes of Angular 5 :-

  1. Build optimizer
  2. Simpler Progressive Web Applications
  3. Improved compiler and Typescript
  4. CLI v1.5 generates Angular 5 projects by default.
  5. New HttpClient
    Example :-
    import { HttpClientModule } from '@angular/common/http';
  6. Angular Universal API and DOM supported.
  7. Improved Material Design.
  8. Router Hooks
  9. Number, Dates and Currency Pipe Updates.
  10. Form Validation method change in Angular 5.
    Example :-
    <input name="Name" ngModel [ngModelOptions]="{updateOn: 'blur'}">
    <form [ngFormOptions]="{updateOn: 'submit'}">
  11. Improvements in the compiler: you can now get the benefits of AOT during development by running ng serve with the AOT flag turned on.
    ng serve --aot
  12. Preserve whitespaces: it can be set per component in the decorator and is true by default.
    @Component({
      templateUrl: 'about.component.html',
      preserveWhitespaces: false
    })
    export class AboutComponent {
    }

    Alternatively, you can define it in your tsconfig.json file; the default is true.
    {
    "extends": "../tsconfig.json",
    "angularCompilerOptions": {
      "preserveWhitespaces": false
    },
    "exclude": [
      "test.ts",
      "**/*.spec.ts"
    ]
    }
  13. New Router Lifecycle Events

Speciality of Angular 6 :-

The Angular 6.0.0 beta was released in March 2018 and the stable version is expected in the first week of April 2018.

The beta version ships with a lot of bug fixes as well as new features and changes.

Important Key attributes of Angular 6 :-

  1. Bazel Compiler
  2. Ivy Renderer
  3. Closure Compiler
  4. Component Dev Kit (CDK)
  5. Service Worker
    Syntax :-
    (a)
    ng generate universal <name>

    (b)
    ng build --app=
  6. Ng update and Schematics
  7. Updates in Animations, Forms and Router
  8. Add CDK stable Version
  9. Support for native custom elements (Angular Elements)
  10. TypeScript 2.7+ support
  11. Decorator Error messages are improved.
  12. Changes in ngModel
    Example :- Previously ngModel looked like,
    <input [(ngModel)]="firstname" (ngModelChange)="onChange($event)">

    And
    onChange(value) {
    console.log(value);
    }

    Now, it looks like,
    <input #modelDir="ngModel" [(ngModel)]="firstname" (ngModelChange)="onChange(modelDir)">

    And
    onChange(model: NgModel) {
    console.log(model.value);
    }
  13. FormBuilder's array method now accepts multiple validators
    Example :-
    bookForm: FormGroup;
    constructor(private formBuilder: FormBuilder) {}
    ngOnInit() {
    this.bookForm = this.formBuilder.group({
    name: ['', Validators.required],
        options: this.formBuilder.array([], [MyValidators.correctname,  MyValidators.totalCount])
     });
    }

Conclusion : –

That’s it ..

Angular 5 and Angular 6 are faster than earlier versions; application sizes are reduced, and multiple names are supported for directives and components, which helps you migrate without breaking changes.

It is more interesting.

Thank You..

A complete guide to Android Instant App


Overview

Android Instant Apps let users run a native Android application without installing it on the device.

Base feature module : The main module of your instant app is the base feature module. All other feature modules must depend upon the base feature module. The base feature module contains shared resources, for example activities, fragments, and layout files. When built as part of an instant app, this module produces a feature APK. When built as part of an installed app, the base feature module produces an AAR file.

Features : At a very basic level, apps have at least one feature or thing that they do: find a location on a map, send an email, or read the daily news, for example.

Feature Modules : To enable this on-demand downloading of features, you have to split your app into smaller modules and refactor them into feature modules.

Feature APKs : Each feature APK is built from a feature module in your project and can be downloaded on demand by the user and launched as an instant app.

Each feature inside the instant app should have at least one Activity that acts as the entry point for that feature. An entry-point activity hosts the UI of the feature and defines the overall user flow. When users launch the feature on their device, the entry-point activity is what they see first. A feature can have more than one entry-point activity, but it only needs one.

As you can see in the figure, both “Feature 1” and “Feature 2” depend upon the base feature module. In turn, both the instant and installed app modules depend upon the feature 1 and feature 2 modules. All three feature modules shown in the figure – base feature, feature 1, and feature 2 – have the com.android.feature plugin applied in their build configuration files.

Step 1 : Install Instant App SDK

To build an Android Instant App, we first need to install the SDK. Go to Tools > Android > SDK Manager, click on the “SDK Tools” tab and install “Instant Apps Development SDK” by checking the box and hitting “Apply”.

android-instant-image1

Step 2 : Android Source Code

In this step, we will convert the existing application module into a shareable feature module. We will then create a minimal application module that has a dependency on the newly formed feature. Note that this feature module will be included in the Instant App build targets later.

Convert the app module into a feature module called base-feature :-

We start with renaming the module from ‘app’ to ‘base-feature’:

android-instant-image2

Change Module Type :-

Next, we change the module type to a feature module by changing the plugin from com.android.application to com.android.feature in the base-feature/build.gradle file, and we also remove the applicationId, because this is no longer an application module.

// replace apply plugin: 'com.android.application'
// with
apply plugin: 'com.android.feature'

// remove application id
applicationId "com.instantapp.demo"

Specify base feature in the project base-feature/build.gradle

android {
    ...
    baseFeature = true
    ...
}

Create appapk module to build APK file :-

Now that we have turned our source code into a reusable library module, we can create a minimal application module that will produce the APK. Go to File -> New Module.

android-instant-image3

Enter the application name "appapk" and leave the suggested module name (instantappdemo).

Replace the compile dependencies in appapk/build.gradle :-

dependencies {
    implementation project(':base-feature')
}

Switch back to the "Android view" and remove the application element from appapk/src/main/AndroidManifest.xml. It should contain just this single manifest element.

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
   package="com.instantapp.demo">
</manifest>

android-instant-image4

We have now moved the application’s core functionality into a shareable feature module and we are ready to start adding the Instant App modules.

Creating the instant app APK :-

android-instant-image5

The Instant App module is simply a wrapper for all the feature modules in your project. It should not contain any code or resources.

Create an Instant App module :-

Select File -> New -> New Module

android-instant-image6

Next, we have to update the instant app's gradle file so that it depends upon the base feature module.

instantapp/build.gradle

apply plugin: 'com.android.instantapp'

dependencies {
   implementation project(":base-feature")
}

The instant app does not hold any code or resources. It contains just a build.gradle file.

Now do a clean rebuild: Build -> Rebuild Project.

Defining App Links :-

Select Tools > App Link Assistant

android-instant-image7

Now tap on the “Open URL Mapping Editor” button.

android-instant-image8

Now tap the “+” button.

Create a new URL mapping.
Host: http://yudiz.com
Path: pathPattern, /mainactivity
Activity: .MainActivity (base-feature)

android-instant-image9

You have to repeat the same process if you have multiple mappings.

Now select the Run configuration dropdown and choose “Edit Configurations…”

android-instant-image10

Now select instantapp under Android App.

android-instant-image11

Replace the text ‘<<ERROR–NO URL SET>>’ with https://yourdomain.com/mainactivity

For the mapping on the website side, again select Tools > App Link Assistant

android-instant-image8

Now tap on the “Open Digital Assets Links file Generator”.

android-instant-image12

Now enter your Site Domain and Application ID

Then select either Signing config or Select keystore file.

Now tap on the “Generate Digital Assets Links file” button.

android-instant-image13

Now you get the signing file details as in the image above, and you have to upload this file to your domain at the suggested path: https://yourdomain.com/.well-known/assetlinks.json (your domain server must be running with HTTPS/SSL).

After uploading it to the server, you have to hit the "Link and Verify" button.

android-instant-image14

If something went wrong while uploading the file to the server, it will give you an error like the image below.

android-instant-image15

After verifying successfully, select instantapp from the Run configuration dropdown and click Run.

android-instant-image16

You’re done.

Step 3 : Test

Now you can see in the video below that the instant app runs without getting installed and does not show up in the application menu.

Object interaction in ARCore for android


Overview

After a long wait, Google has finally released a stable version of the ARCore SDK for Android.
And it hasn't disappointed us from a functionality point of view: the brand new SDK is capable of detecting vertical as well as horizontal surfaces, unlike the prior couple of developer versions which could only detect horizontal surfaces.

ARCore SDK 1.0

arcode

The latest SDK supports a wider range of devices, including Asus and Huawei products. But roses often come with thorns!

Google has imposed a small restriction on the memory usage of devices while running AR apps, which forces us to use background threads for heavy operations like changing the texture of objects at run time.
This is an obvious step in order to support devices with lower memory.
In earlier versions of the SDK, device memory was rather vulnerable, so as a conclusion I consider this a positive.
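
As a rough sketch of that approach (none of these names come from Google's sample: surfaceView is the app's GLSurfaceView and applyTexture() stands in for whatever call actually re-uploads the texture), the heavy bitmap decoding can be pushed onto a worker thread and only the GL-side update queued back onto the GL thread:

// Uses android.graphics.BitmapFactory and java.util.concurrent.ExecutorService.
private final ExecutorService backgroundExecutor = Executors.newSingleThreadExecutor();

private void changeTextureAsync(final Context context, final String assetName) {
    backgroundExecutor.execute(new Runnable() {
        @Override
        public void run() {
            try (InputStream in = context.getAssets().open(assetName)) {
                final Bitmap bitmap = BitmapFactory.decodeStream(in); // heavy work, off the GL thread
                surfaceView.queueEvent(new Runnable() {               // hop back onto the GL thread
                    @Override
                    public void run() {
                        applyTexture(bitmap); // hypothetical helper that re-uploads the texture via GLES20
                    }
                });
            } catch (IOException e) {
                Log.e(TAG, "Failed to load texture " + assetName, e);
            }
        }
    });
}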

Google has also released emulators to test apps. Isn’t it mind boggling?

simulator

I have kept an eye on ARCore since the day Google released its developer preview version. The sample app provided by Google uses OpenGL to read and load objects. But, it just shows us how to place objects.
As a developer, one cannot stay satisfied with such simple functionality!
As I’m a beginner in OpenGL for android, I searched a lot to add object interaction functionality in the app.

I’ll show you how to rotate, scale and move the objects around and an eye-catching functionality – changing objects at run time.

arcode-gif

Initially, you will need the ARCore sample app provided by Google, an emulator or device that supports ARCore, and 3D models with their textures.
I'll not dive into the deep ocean of ARCore and OpenGL by explaining the very basics. The only thing to keep in mind is that ARCore places points in the real world and tracks them; the task of drawing and moving objects is handled entirely by the graphics library, in this case OpenGL.

Loading object into ARCore

Place models and their textures in assets folder.

models

Here, andy and shoes are our two models.
Now, declare two String variables in MainActivity.java to hold the values of object file and texture file. Initialize them as shown below.

private String objName = "models/shoes.obj";
private String textureName = "models/shoes.jpg";

In onSurfaceCreated( ) method, use these variables to create model.

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {

   GLES20.glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
   try {
       backgroundRenderer.createOnGlThread(getContext());
       planeRenderer.createOnGlThread(getContext(), "models/trigrid.png");
       pointCloudRenderer.createOnGlThread(getContext());
       virtualObject.createOnGlThread(getContext(), objName, textureName);
       virtualObject.setMaterialProperties(0.0f, 2.0f, 0.5f, 6.0f);

   } catch (IOException e) {
       Log.e(TAG, "Failed to read an asset file", e);
   }
}

Note: In onDraw( ) method, change the if-condition to restrict the user to place one object at a time.

if (tap != null && camera.getTrackingState() == TrackingState.TRACKING) {
   for (HitResult hit : frame.hitTest(tap)) {
       Trackable trackable = hit.getTrackable();
       if ((trackable instanceof Plane && ((Plane)trackable).isPoseInPolygon(hit.getHitPose()))
               || (trackable instanceof Point
               && ((Point) trackable).getOrientationMode()
               == Point.OrientationMode.ESTIMATED_SURFACE_NORMAL)) {
           if (anchors.size() >= 1) {
               anchors.get(0).detach();
               anchors.remove(0);
           }
           anchors.add(hit.createAnchor());
           break;
       }
   }
}

Rotating object

I have used a helper class to detect pinch-to-rotate gesture. Based on requirement, any gesture can be used here.

package com.yudiz.arexample.helpers;

import android.view.MotionEvent;

public class RotationGestureDetector {
   private static final int INVALID_POINTER_ID = -1;
   private float fX, fY, sX, sY;
   private int ptrID1, ptrID2;
   private float mAngle;

   private OnRotationGestureListener mListener;

   public float getAngle() {
       return mAngle;
   }

   public RotationGestureDetector(OnRotationGestureListener listener) {
       mListener = listener;
       ptrID1 = INVALID_POINTER_ID;
       ptrID2 = INVALID_POINTER_ID;
   }

   public boolean onTouchEvent(MotionEvent event) {
       switch (event.getActionMasked()) {
           case MotionEvent.ACTION_DOWN:
               ptrID1 = event.getPointerId(event.getActionIndex());
               break;
           case MotionEvent.ACTION_POINTER_DOWN:
               ptrID2 = event.getPointerId(event.getActionIndex());
               sX = event.getX(event.findPointerIndex(ptrID1));
               sY = event.getY(event.findPointerIndex(ptrID1));
               fX = event.getX(event.findPointerIndex(ptrID2));
               fY = event.getY(event.findPointerIndex(ptrID2));
               break;
           case MotionEvent.ACTION_MOVE:
               if (ptrID1 != INVALID_POINTER_ID && ptrID2 != INVALID_POINTER_ID) {
                   float nfX, nfY, nsX, nsY;
                   nsX = event.getX(event.findPointerIndex(ptrID1));
                   nsY = event.getY(event.findPointerIndex(ptrID1));
                   nfX = event.getX(event.findPointerIndex(ptrID2));
                   nfY = event.getY(event.findPointerIndex(ptrID2));

                   mAngle = angleBetweenLines(fX, fY, sX, sY, nfX, nfY, nsX, nsY);

                   if (mListener != null) {
                       mListener.OnRotation(this);
                   }
               }
               break;
           case MotionEvent.ACTION_UP:
               ptrID1 = INVALID_POINTER_ID;
               break;
           case MotionEvent.ACTION_POINTER_UP:
               ptrID2 = INVALID_POINTER_ID;
               break;
           case MotionEvent.ACTION_CANCEL:
               ptrID1 = INVALID_POINTER_ID;
               ptrID2 = INVALID_POINTER_ID;
               break;
       }
       return true;
   }

   private float angleBetweenLines(float fX, float fY, float sX, float sY, float nfX, float nfY, float nsX, float nsY) {
       float angle1 = (float) Math.atan2((fY - sY), (fX - sX));
       float angle2 = (float) Math.atan2((nfY - nsY), (nfX - nsX));

       float angle = ((float) Math.toDegrees(angle1 - angle2)) % 360;
       if (angle < -180.f) angle += 360.0f;
       if (angle > 180.f) angle -= 360.0f;
       return angle;
   }

   public static interface OnRotationGestureListener {
       public void OnRotation(RotationGestureDetector rotationDetector);
   }
}

We have to implement its OnRotationGestureListener( ) in our main class.

@Override
public void OnRotation(RotationGestureDetector rotationDetector) {
   float angle = rotationDetector.getAngle();
   GlobalClass.rotateF = GlobalClass.rotateF + angle / 10;
}

Declare a static public float variable to store the rotation value, in this case: GlobalClass.rotateF.
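
The sample itself does not include these pieces, so here is a minimal sketch of how they could look; GlobalClass is just a plain static holder, and the detector is fed from the surface view's touch listener (surfaceView and gestureDetector refer to fields already present in the sample activity):

// GlobalClass.java - plain holder for values shared between the activity and the renderer.
public class GlobalClass {
    public static float rotateF = 0f;       // accumulated rotation in degrees
    public static float scaleFactor = 1.0f; // model scale, used later for scaling
}

// In the activity: create the detector once and forward touch events to it.
rotationGestureDetector = new RotationGestureDetector(this);

surfaceView.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        rotationGestureDetector.onTouchEvent(event); // fires OnRotation(...) shown above
        gestureDetector.onTouchEvent(event);         // the sample's existing tap/scroll detector
        return true;
    }
});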

Now, in ObjectRenderer.java class, declare and initialize a matrix (4 x 4 array).

private float[] mFinalModelViewProjectionMatrix = new float[16];

In draw( ) method, edit code as shown below.

ShaderUtil.checkGLError(TAG, "Before draw");

Matrix.multiplyMM(modelViewMatrix, 0, cameraView, 0, modelMatrix, 0);
Matrix.multiplyMM(modelViewProjectionMatrix, 0, cameraPerspective, 0, modelViewMatrix, 0);

//rotation
Matrix.setRotateM(mRotationMatrix, 0, GlobalClass.rotateF, 0.0f, 1.0f, 0.0f);

Matrix.multiplyMM(mFinalModelViewProjectionMatrix, 0, modelViewProjectionMatrix, 0, mRotationMatrix, 0);

// Do not overwrite mFinalModelViewProjectionMatrix here, or the rotation applied above would be lost.

GLES20.glUseProgram(program);

Matrix.setRotateM(mRotationMatrix, 0, GlobalClass.rotateF, 0.0f, 1.0f, 0.0f);
Here, the rotation factor is used to rotate the object's matrix around the y-axis; the 4th, 5th and 6th arguments correspond to the x, y and z axes respectively.

Use the final matrix as in the code below.

GLES20.glUniformMatrix4fv(modelViewProjectionUniform, 1, false, mFinalModelViewProjectionMatrix, 0);

Moving object

This is actually a workaround to translate the object along the surface.
Redrawing the object in onScroll( ) method of surface view does this trick.

@Override
public boolean onScroll(MotionEvent e1, MotionEvent e2, float distanceX, float distanceY) {
   if (mPtrCount < 2) {
       queuedSingleTaps.offer(motionEvent);
       return true;
   } else
       return false;
}

Here, I have used a counter to track the number of fingers touching the surface. If there are fewer than 2 fingers on the surface, the object gets redrawn at the touched point; hence the scrolling gesture produces a translating effect on the object.
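
The counter itself is not part of Google's sample, so here is a minimal sketch of how mPtrCount could be maintained, assuming touch events are forwarded from the surface view's listener before the gesture detectors run:

private int mPtrCount = 0;

private void trackPointerCount(MotionEvent event) {
    switch (event.getActionMasked()) {
        case MotionEvent.ACTION_DOWN:
        case MotionEvent.ACTION_POINTER_DOWN:
            mPtrCount++;   // a finger went down
            break;
        case MotionEvent.ACTION_UP:
        case MotionEvent.ACTION_POINTER_UP:
            mPtrCount--;   // a finger went up
            break;
        case MotionEvent.ACTION_CANCEL:
            mPtrCount = 0; // gesture cancelled, reset
            break;
    }
}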

Scaling object

To implement this functionality, we have used onDoubleTap( ) listener.

We need to store scale factor in a public static variable, in this case: GlobalClass.scaleFactor.

This factor is used in onDrawFrame( ) method of main activity. This method is executed continuously in fraction of a second.

virtualObject.updateModelMatrix(anchorMatrix, GlobalClass.scaleFactor);

updateModelMatrix( ) is a method of object renderer class which sets the scale of the model.

@Override
public boolean onDoubleTap(MotionEvent e) {
   GlobalClass.scaleFactor += GlobalClass.scaleFactor;
   return true;
}

I have increased the factor’s value in the method.
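
Since doubling the factor on every double tap grows the model without bound, in practice you may want to clamp or cycle the value; a hedged variation (the 0.5f and 4.0f bounds are arbitrary, not taken from the sample):

private static final float MIN_SCALE = 0.5f;
private static final float MAX_SCALE = 4.0f;

@Override
public boolean onDoubleTap(MotionEvent e) {
    float newScale = GlobalClass.scaleFactor * 2f;
    if (newScale > MAX_SCALE) {
        newScale = MIN_SCALE; // wrap back to the smallest size instead of growing forever
    }
    GlobalClass.scaleFactor = newScale;
    return true;
}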

Changing object at run time

Here comes the coolest functionality. We have a button; clicking it will change the model. All we have to do is change the values of the String variables which we declared for storing the model name and texture.

@Override
public void onClick(View view) {
   objName = "models/andy.obj";
   textureName = "models/andy.png";
   isObjReplaced = true;
}

We have to keep track of whether the model has changed; the isObjReplaced flag is used for that.

In onDrawFrame( ) method, we have to add the below code. This will replace the object.

if (isObjReplaced) {
   isObjReplaced = false;
   try {
       virtualObject.createOnGlThread(getContext(), objName, textureName);
       virtualObject.setMaterialProperties(0.0f, 2.0f, 0.5f, 6.0f);
   } catch (IOException e) {
       e.printStackTrace();
   }
   return;
}

Tip: To read the model and textures from the SD card, replace the code in createOnGlThread( )

File dir = new File(<path_to_file>);
FileInputStream objInputStream = new FileInputStream(dir);
Obj obj = ObjReader.read(objInputStream);

Here, at Yudiz, we are concentrating on advanced ARCore topics like managing multiple objects simultaneously, selecting models using touch gestures.

Conclusion

ARCore is Google’s answer to Apple’s ARKit. I personally think that it will dominate in AR field as it has great potential.

Getting started with Selenium


Overview

Selenium is a suite of tools, each serving different testing needs of an organization. It has four components.

  • Selenium Integrated Development Environment (IDE)
  • Selenium Remote Control (RC)
  • Selenium WebDriver
  • Selenium Grid

Introduction to Selenium Webdriver:

WebDriver is a web automation framework. It permits you to execute your tests against different browsers, not just Firefox (unlike Selenium IDE).

WebDriver also allows you to use a programming language to create your test scripts (which is not possible in Selenium IDE).

  • You can now perform conditional operations like if-then-else or switch-case (see the short sketch after this list)
  • You can also use looping like do-while.
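
For example, a rough sketch of such a conditional check (the element id used here is made up purely for illustration):

WebElement banner = driver.findElement(By.id("promo-banner")); // hypothetical element id
if (banner.isDisplayed()) {
    banner.click();                 // interact with it only when it is actually shown
} else {
    System.out.println("Banner not shown, continuing with the test");
}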

Following programming languages are supported by WebDriver

  • Java
  • .Net
  • PHP
  • Python
  • Perl
  • Ruby

Download and Installation:

Step 1 – Install Java on your computer:

Download and install the Java Software Development Kit (JDK)

selenium-image1

selenium-image2

JDK version comes bundled with Java Runtime Environment (JRE), so you don’t have to download and install the JRE separately.

Step 2 – Install Eclipse IDE:

Download “Eclipse IDE for Java Developers”, an exe file named “eclipse-inst-win64”

selenium-image3

selenium-image4

selenium-image5

selenium-image6

selenium-image7

Step 3 – Download the Selenium Java Client Driver:

Download the Selenium Java Client Driver. Many client drivers for other languages are found there, but the one for Java should be chosen.

selenium-image8

Step 4 – Configure Eclipse IDE with WebDriver:

Create a new project by clicking on File > New > Java Project. Name the project as “demo project”.

A new pop-up window will get opened, enter details as follow :

  1. Project Name
  2. Location to save project
  3. Select an execution JRE
  4. Select layout project option
  5. Click on Finish button

selenium-image9

selenium-image10

In this step,

  1. Right-click on the newly created project and
  2. Select New > Package, and name that package as “demo package”.

selenium-image11

A pop-up window will get opened to name the package.

  1. Enter the name of the package
  2. Click on Finish button

selenium-image12

Create a new Java class under new package by right-clicking on it and then selecting- New > Class, and then name it as “MyClass”.

selenium-image13

selenium-image14

selenium-image15

Now add selenium JARs to Java Build Path
In this step,

  1. Right-click on “demo project” and select Properties.
  2. On the Properties dialog, click on “Java Build Path”.
  3. Click on the Libraries tab, and then
  4. Click on “Add External JARs button.”

When you click on “Add External JARs..” A pop-up window will get opened. Select all the JAR files you want to add.

selenium-image16

Select all the JAR files that are inside the libs folder.

selenium-image17

selenium-image18

Select the JAR files that are outside the libs folder

selenium-image19

Add all the JAR files which are inside and outside the “libs” folder.

selenium-image20

How to identify Web Element:

To locate elements in Webdriver, we can use “findElement(By.locator())” method.

Locators are the HTML properties of a web element by which Selenium locates the web element on which it needs to perform the action.

selenium-image21
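
As an illustration, the same login button could be located through several different strategies (all attribute values below are made up):

// Each call locates the same element by a different HTML property.
driver.findElement(By.id("loginBtn"));                        // by id attribute
driver.findElement(By.name("login"));                         // by name attribute
driver.findElement(By.className("btn-primary"));              // by class attribute
driver.findElement(By.linkText("Sign in"));                   // by exact link text
driver.findElement(By.cssSelector("form#login button"));      // by CSS selector
driver.findElement(By.xpath("//form[@id='login']//button"));  // by XPath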

Selenium WebDriver Commands:

Opening a URL:-
Using Get method

  • Selenium has driver.get() method which is used for navigating to a web page by passing the string URL as parameter.
  • Syntax: driver.get(“http://google.com”);

Clicking on web element :-
The click() method in Selenium is used for performing the click operation on web elements.

//Clicking an element directly
	driver.findElement(By.id("button1")).click();

//Or by creating a WebElement first and then applying click() operation
	WebElement submitButton = driver.findElement(By.id("button2"));
	submitButton.click();

Writing in a textbox :-
The sendKeys() method is used for writing in a textbox or any element of text input type.

//Creating a textbox webElement
	WebElement element = driver.findElement(By.name("q"));

//Using sendKeys to write in the textbox
	element.sendKeys("ArtOfTesting!");

Clearing text in a textbox :-
The clear() method is used to clear the text written in a textbox or any web element of text input type.

driver.findElement(By.name("q")).clear();

Fetching text written over any web element :-
We have getText() method in selenium webDriver for fetching text written over an element.

driver.findElement(By.id("element123")).getText();

Navigating backwards in a browser :-
Selenium provides navigate().back() command for moving backwards within the browser’s history.

driver.navigate().back();

Navigating forward in a browser :-
Selenium provides navigate().forward() command for moving forward in a browser.

driver.navigate().forward();

Refreshing the browser :-

  • There are multiple ways for refreshing a page in Selenium WebDriver-
  • Using driver.navigate().refresh() command
  • Using sendKeys(Keys.F5) on any textbox on the webpage
  • Using driver.get(“URL”) with current URL
  • Using driver.navigate().to(“URL”) with current URL

//Refreshing browser using navigate().refresh()
	driver.navigate().refresh();

	//By pressing F5 key on any textbox element
	driver.findElement(By.id("id123")).sendKeys(Keys.F5);

	//By reopening the current URL using get() method
	driver.get("http"//artoftesting.com");

	//By reopening the current URL using navigate() method
	driver.navigate().to("http://www.artoftesting.com");

Closing the browser :-
Selenium has two commands for closing browsers: close() and quit(). The driver.close() command closes the browser window that currently has focus, whereas driver.quit() closes all the open browser instances.

//To close the current browser instance
driver.close();

//To close all the open browser instances
	driver.quit();

Basic Script:

package demo;
import org.openqa.selenium.*;
import org.openqa.selenium.chrome.*;

public class FirstScript
{
	public static void main(String[] args)
	{
		 System.setProperty("webdriver.chrome.driver","/home/yudiz/eclipse-workspace/facebook/lib/chrome driver/chromedriver");
		WebDriver driver = new ChromeDriver();
		driver.get("https://www.facebook.com/");
		driver.quit();
	}
}

Conclusion:
You might find script writing as lengthy as writing test cases but my friend, the scripts are reusable! You don't need new scripts all the time, even if the version of the OS on the device changes. Automation lets you redo the test exactly the same way, without forgetting any steps. In the end you will have better quality software, released earlier, with fewer problems and fewer resources used. And it's FUN after all! Happy Testing to You! 😀

Introducing Magic Leap With Unity3D


Overview

Magic Leap is adding another dimension to computing, where the digital respects the physical and the two work together to make life much better. Magic Leap One is built for creators who want to change how we experience the world. It brings the digital world into the real world; simply put, it gives us a mixed reality experience. Before we take a step forward into mixed reality, we have to be clear about these three terms: virtual reality, augmented reality and mixed reality.

What is the difference between Virtual Reality, Augmented Reality and Mixed Reality?

To cut a long story short, here’s the difference between virtual, augmented, and mixed reality technologies:

  • Virtual reality (VR) immerses users in a fully artificial digital environment.
  • Augmented reality (AR) overlays virtual objects on the real-world environment.
  • Mixed reality (MR) not just overlays but interacts with your real world. It anchors virtual objects in the real world.

Begin exploring

To get started you need to download the Unity engine tool for Magic Leap Technical Preview and Magic Leap Lumin SDK.

Lumin Sdk :

https://creator.magicleap.com/downloads/lumin-sdk/unity

Unity Technical Preview :

https://beta.unity3d.com/download/94d3b60453d2/UnityDownloadAssistant.dmg?_ga=2.152723193.1759123696.1522124091-240620625.1522124091

Magic Leap Setup

Step 1 : Install required packages from Magic Leap Package Manager

Download and install the latest Magic Leap Package Manager. Lumin SDK has a lot of magic stuff and tools to help create experiences and apps. We don’t have to cover all that now, so, let’s continue with all this marked packages.

leap-setup

Step 2 : Generate, decorate and experience virtual rooms

With the help of the Magic Leap Simulator, you can quickly see your changes from the Unity Editor without deploying to the Magic Leap headset device. The simulator covers the major Lumin SDK features. After creating your own room using the Virtual Room Generator, load your room in the Magic Leap Simulator; here you can fully explore your room and test it.

In the Magic Leap Remote – Simulator Mode window

  1. Click Start Simulator
  2. In the Simulator window, Click the ☰ menu (the one in the Mini Map window) – Click Load Virtual Room
    leap-setup
  3. Navigate to VirtualDevice\data\VirtualRooms\ExampleRooms in your Lumin SDK installation folder.
  4. Select an example room of choice
    leap-setupYou can also generate your custom room with Virtual Room Generatorleap-setup

Unity Setup

Magic Leap with Unity

Hope you are ready with Unity 2018.1.0b8 as it’s essentially required for magic leap demonstration.

Let’s get a new Unity project setup using the Magic Leap template. Templates are a new feature in Unity’s project launcher.

  1. Open the Unity 2018.1.0b8-MLTP1
  2. Create new project
  3. Set project name and location
  4. [New!] Template: Magic Leap
  5. Select Create project

unity-setup

Project Setup

Scene

  1. GameObject > 3D Object > Cube
    scene

Build Settings

  1. File > Build Settings
  2. Under the Platform section
    1. Select Lumin OS
    2. Click the Switch Platform button below
    3. Set the Lumin SDK Location path. For example, /User/(User Name)/MagicLeap/mlsdk/v0.11.1
  3. Close the Build Settings windowbuild-settings

Zero-Iterating

Let’s enable Zero Iteration.

  1. Click the Magic Leap menu at the top of Unity
  2. Click Enable Zero Iteration
    zero-iteration
  3. An Editor Restart Required popup will appear
  4. Click Restart
  5. Unity will now restart in OpenGL mode

Let’s zap our cube over to the Simulator window.

  1. Unity click Play to enter Play Mode
  2. Our cube now appears in the Eye View window pane of the Simulator
  3. Use WASD and mouse controls to move around and view the cube from different angles

simulator

Remember: The simulator should always be started before pressing Play

As you can see, the cube is in the real world but it is not yet interacting with it. You cannot drag your cube behind a sofa, under a table or anywhere else you want; it still looks like plain AR. To interact with the real world you also need a mesh of the real environment in Unity. Once that is available, it is no big deal to put your cube wherever you want in the real world. Magic Leap provides this functionality to import meshes of the real world into Unity, so let's go ahead and bring those meshes in.

Meshing in Unity

Meshing is a feature of World Reconstruction that detects real world surfaces and constructs a virtual mesh around those objects.

  1. Import the Magic Leap Unity Package (Path : /Users/UserName/MagicLeap/tools/unity/v0.11.1).package
  2. Create an empty GameObject, name it MLSpatialMapper.
  3. Attach an MLSpatialMapper Script (AR->Magic Leap->Spatial Mapper) to the MLSpatialMapper GameObject.
  4. Create the mesh prefab.
    1. Select MLSpatialMapper and create a child GameObject, named Original.
    2. This object acts as a template for creating the meshes. At runtime, MLSpatialMapper will create meshes and set them in the Mesh field of the MeshFilter attached to the Original GameObject.
    3. Disable the Original GameObject.
    4. Remove any transform offsets.
    5. Add a Mesh Renderer component.
    6. Add a Mesh Filter component
    7. Add a Mesh Collider component
      mesh-collider
  5. Launch the application using the Magic Leap Remote(see the Prerequisites). The mesh should appear shaded pink (since no materials have yet been applied).
  6. Now let’s apply materials to the mesh.
    1. Create a new material, and name it Wireframe. Assign the Wireframe shader (VR->SpatialMapping->Wireframe).wireframe
    2. Create a new material, and name it Occlusion. Assign the Occlusion shader (VR->SpatialMapping->Occlusion).occlusion
    3. Assign both materials to the Original GameObject’s MeshRenderer.
  7. Create one Cube and add Rigidbody component to it.
  8. Launch Simulator and click on play button in Unity.

You will see the meshes appear in the Scene view in Unity, and your cube now interacts with the real world. You can put the cube wherever you want in the environment.

Now your game characters can come into the real world and behave with respect to it. Here you can see a demo game in which the player plays with a ball in the real environment.

Credit:

It was combined efforts of Damini Hajare, Bhavesh Savaliya and Vaibhav Vasoya.

Digital Painting


Overview

We do not own skills when we are born; they come with time, and the catalyst for acquiring any skill is our surroundings and situations. So my point is, nobody is great or skillful by birth. It is a learning process which takes time, which means that if you are passionate and hard working then you can definitely achieve any skill you desire.

I state these points because I have faced the frustration and jealousy of not being able to learn digital painting, but slowly I tried to console myself that I should keep patience. It takes time to learn something new; everything has its own pace.

Moving on, our topic of interest being digital painting, I am going to explain some very basic points about it, hoping they will help you get rid of the digital painting phobia.

“During my digital painting journey, the funny part was I used to tell my friend, that ctrl + alt + z has become my best friend while using Photoshop” (we’ll talk about above statement at the end of this blog until then enjoy the simplicity of digital painting).

Firstly, what is the ideal document size for painting anything in Photoshop? Well, I keep width x height at 3000 x 2000 (or vice versa), the resolution at 300 pixels/inch, and the color mode RGB or CMYK depending on the purpose of the painting. These are ideal document settings for painting HD artwork.

Understanding the Photoshop UI

I hope you all may already know this, but to keep it simple: the little icons on the left-hand side are tools used for various purposes, and for painting I recommend the Brush Tool (B), Eraser Tool (E), Smudge Tool, Paint Bucket and Gradient Tool (G), Magic Wand Tool (W) and Lasso Tool (L).

Whoa, so many of them already! Now on the right-hand side, the space at the very bottom is for the layers. At the very top of that is the Navigator (if it is not there, you can enable it from the Window menu), and in between the two should be the color cube. The tools on the right-hand side can be customized by the user; I have only mentioned the important panels that should be present there. And if you press F5 you get the Brush panel, where you can see the different settings of the selected brush.

digital-image1

digital-image2

Understanding the layer mode while painting shadows and highlights:

You can see the layer mode in the second row from the top of the Layers panel. "Once a great artist said you should use yellow color for highlights and blue color for the shadow." But it depends on the artist how he or she manages to play with light and shadow. If you want to keep it simple, use the approach I describe below (but don't forget to change the layer mode).

Keeping it straight: change the layer mode to Multiply for shadows. The mechanic behind it is that it doubles the selected base color, making the tone darker. The Overlay mode is for highlights: it brightens the base color, making the tone lighter. Now the very important part: you should make a separate layer for each mode you are going to use. That's how it works.

There is another advanced technique which allows you to work on a single layer (I will talk about this in my next blog).

digital-image3

digital-image4

Understanding Flow and Opacity:

Now I will explain this in a very simple manner. Suppose you have a toothpaste tube with an unlimited amount of paste in it: the more you squeeze, the more paste you get. Flow works on the same principle; in common traditional words, it is the amount of color pigment you want. Opacity is like a limiter on that flow. Both are like powers of the Brush tool while using it in Photoshop.

Understanding the Brush tool:

Most of you, after seeing an awesome painting, think the artist must have used many different brushes, but don't stress about using different brushes to paint something great. To give you an example, there is an artist named Yue Wang, whom you may know as sakimichan: she uses only a hard round brush to paint her awesome art pieces (well, there is a paradox there too).

In the Brush preset panel (F5) there are many different features we can give to any brush. I will point out some of the important settings. The first option is Shape Dynamics: for now focus only on Size Jitter, ignoring the word Jitter; just below it you can see Control, which should be set to Pen Pressure. It basically determines the thickness of the brush strokes, which is helpful for creating line art for your drawings. Next is Transfer, which is what I said about Flow and Opacity: from here you can change opacity based on pen pressure. Last is Smoothing, which should always be on (it is about anti-aliasing). These three options are the base settings for a simple brush.

Making your own brush is also a simple business: just paint any pattern you want on the canvas, then go to Edit at the top, where you can see the Define Brush Preset option, and click on it. There you go, play with the presets of your newly made brush.

digital-image5

digital-image6

digital-image7

Ending this here: above I mentioned Ctrl + Alt + Z. When you make mistakes in traditional painting it is a tough business to deal with them, but in digital painting, if you make a mistake, you know what to do: simply undo that action.

And a few things I want to say to all beginner artists:

Do not try to make your masterpiece in your initial stage, just focus on learning.

In your journey, do not get frustrated by seeing others' work. My point is, do not compare yourself with others; you have your own pace of learning things, so be patient.

Learn from others, get inspired by others, do not get jealous of others.

And keep practicing.

There are lots of things to say and to explore. Until next time!


Introduction of ECS (Entity Component System)


Overview

This article explains Unity ECS (Entity Component System) and why you should use it.

Introduction to Unity ECS

The new implementation of the Entity Component System is a design pattern mostly used in game development, consisting of entities, components and systems.

Entity – a collection of components, usually implemented as an object with a unique ID.

Component – A container of data only.

System – defines the game behavior and contains logic.

Unity framework is shifting from the old entity component system to more modern data oriented entity component system which will make code reuse easier. ECS leverages the C# Job system and Burst compiler which allows you to take the full advantage of today’s multi core processors.

There are different frameworks used for ECS like:

  • Entitas (C#)
  • Artemis (C#)
  • EgoCS (C#)
  • EntityX (C++)
  • Anax (C++)

Why Unity ECS?

Unity ECS is an interesting approach not only for designing your game code but also for many of the other features that make up the Unity engine, from physics simulation to graphics rendering.

There are two forms of Unity ECS that define different approaches with respect to MonoBehaviour:

  1. Pure ECS
  2. Hybrid ECS

In Pure ECS, entities are the new GameObjects and there are no more MonoBehaviours: data is stored in components and logic in systems. It also utilizes the new C# Job System, which gives performance benefits with the help of the Burst Compiler and multithreading.

Hybrid ECS includes all the features of Pure ECS, plus special helper classes which convert GameObjects into entities and MonoBehaviours into components.

Unity ECS makes the performance much better through efficient machine code for all platforms, which includes the following key features:

Optimized Data – whenever the code is written using pure unity ECS, your component data is guaranteed to be stored linearly in memory. Your system will access entity components in the most optimal possible ways.

Multithreaded code – utilizes the C# Job System, which allows you to write multithreaded code in a safe and simple way. This allows you to run systems in parallel and utilize the cores of the processor.

Burst Compiler – This compiler is a new LLVM (Low Level Virtual Machine) based math – aware backend compiler technology which takes C# jobs and produces highly-optimized code taking advantage of the particular capabilities of the platform you’re compiling for. This compiler is so powerful that it renders so many instances (e.g. 24K Instances) of the same model.

Example: Render a massive number of skinned meshes in Unity for 10K instances at an average 185 fps i.e., more than 850 triangles per second.

ecs

So, this is just an introduction part of Unity ECS. Stay Tuned for more advanced post on similar topic.

Is “Flutter” Google‘s reply to Facebook’s “React-native” ?


Overview

We all know that mobile users expect their apps to have a beautiful UI, smooth animation and great performance. To deliver such features, developers have to build apps that work fast without compromising quality and performance.

There is a way we can easily build this: the "Flutter" framework that has been introduced by Google.

Flutter is “Open Source” Framework which provides expressive and fast way for developers to build native apps for both iOS and Android.

The first version of Flutter was known as "Sky" and ran on the Android operating system. The original author of Flutter is Google, and it is developed by Google and the community. Flutter was initially released in May 2017 and is written in the "Dart" language.

Why Should We Use FLUTTER ?

flutter-image1

Top Features :

  • Flutter uses Dart as its core development language.
  • Dart is developed by Google and it is used to build web, server and mobile applications and for Internet of Things devices. It is similar to Java, C++ or C#. So, learning Dart will not be a major issue.
  • Easy Firebase integration: Flutter provides a plugin for firebase integration which lets users to painlessly integrate with a remote database that allows real time sync.
  • Hot reload: The wait for a miraculous build feature to re-run the app without rebuilding it is now over!
    Flutter has a button called Hot reload. This is my most favorite feature of Flutter.
    Hot reload works by injecting updated source code files into the running Dart Virtual Machine (VM).
    This re-runs the app in a mere 2-3 seconds, without restarting it and while preserving the state of the app.

flutter-image2

There are 3 ways where Flutter will help you:

1. Fast development and re-development:

  1. Flutter is engineered for high development velocity; with the help of the hot reload feature we can change our code and see it come to life in less than a second, without losing the state of the app.
  2. Flutter also has a rich set of customizable widgets, all built from a modern reactive framework.
  3. Flutter integrates with popular development tools (editors or IDEs) like IntelliJ, Android Studio, Visual Studio Code and many more.

2. Flexible UI with Expressive Features:

Flutter moves the widgets, rendering, animation and gestures into the framework to give you complete control over the screen and every pixel on it. This means you have the flexibility to build custom designs and much more.

3. Native Apps for iOS and Android (Compatibility):

Apps made with Flutter follow platform conventions and interface details such as scrolling, navigation icons, fonts and more.

That is the reason apps built with Flutter are featured in both the App Store and Google Play Store.

Flutter is a great revolution for both new and experienced developers. If you are new to mobile app development, Flutter gives you a fast, fun and modern way to develop native apps, and if you are an existing mobile app developer you can easily integrate Flutter with your existing tools to build new apps with expressive user interfaces.

React native Vs Flutter (Is Google’s Flutter or Fighter?)

flutter-image3

Flutter doesn’t use any text Stack which is popular on the Internet, while on the other hand React native uses React and Javascript.

Flutter is focused on a single code base, and this single code base produces both iOS and Android apps. It uses its own language, "Dart".

Flutter comes with built-in Cupertino and Material Design widgets, so whether you are going for a complete iOS experience or an Android experience, both are relatively easy, and you don't have to use third-party libraries like we do in React Native.

Flutter = Beautiful native App in realtime.
React Native = Built native app using Javascript and React.

flutter-image4

There are two parts to the architecture: JavaScript and native. The application runs in JavaScript, and when it has to communicate with the device (e.g. touch events, messages etc.) it goes through the bridge; that is what makes React Native very powerful but slow (the bridge converts JavaScript values into native values).

Sometimes when you are doing animation it becomes very slow. For eg, if you want a drag and drop animation for a smooth UI it needs 60 fps, but you won’t get full 60fps while you are changing the value from javascript to native code.

React Native has a lot more libraries and resources than Flutter, which is one of its biggest advantages.

JavaScript is universal and has a lot more support than Dart.

Top Apps made in React Native : Facebook, Instagram , Myntra , Tesla etc.

flutter-image5

There is no bridge. Two languages are involved, Dart and C++: Flutter is written in Dart, which compiles down to machine code that runs directly on the hardware.
The communication between the application and the OS is kept to a minimum; most things are taken care of by Flutter itself or the Skia engine.

It is like having a browser engine built in. Flutter can give you smoother frames when it comes to your application.

Currently, fewer developers know Dart compared to JavaScript (or React Native).

Top Apps made in Flutter : Hamilton App, Flutter Gallery

So, “What’s different about Flutter?”

  • Flutter is predictable, fast and smooth; code compiles AOT to native (ARM) code.
  • Comes with customizable, beautiful widgets.
  • Full developer control over widgets and layout.
  • No "JavaScript bridge", with better reactive views.
  • The wonderful hot reload feature with amazing developer tools.
  • Better compatibility, better performance, better fidelity, better control, with great fun.

Isn’t It great!!?

flutter-image6

Let’s dive into some practical scenarios:

import 'package:flutter/material.dart';

void main() => runApp(new MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    var title = 'Web Images';

    return new MaterialApp(
      title: title,
      home: new Scaffold(
        appBar: new AppBar(
          title: new Text(title),
        ),
        body: new Image.network(
          'https://github.com/flutter/website/blob/master/_includes/code/layout/lakes/images/lake.jpg?raw=true',
        ),
      ),
    );
  }
}

The above example shows how easy it is to fetch an image from internet and to load it into a view.

Let’s understand this example in detail. There is a main( ) method which starts the app using the MyApp( ) class.
This class extends StatelessWidget.
There are stateless and stateful widgets in Flutter. Stateless widgets are used where the state should not change at runtime, e.g. a screen. Stateful widgets are used where we need to change state, like changing the color of a button, changing text and so on.

Flutter concentrates heavily on Material app design.
Here, one widget is created having a title, home and body.
The title is used in the toolbar.
Home is a Scaffold object. Scaffold basically means structure; it creates the complete structure of the screen.
The body includes a view to load the image. Here, with a single line, we have fetched an image from the network and loaded it into the view.

We, here at Yudiz, have started to dive into this miraculous and super interesting framework and soon will produce beautiful apps using it.

Conclusion

Flutter is a huge step forward into app development world and I’m quite sure that this will take all the current frameworks and app development methods by storm.

Custom Pull-To-RefreshView: Swift 4


Overview

Hello everyone, hope you are doing well and having a nice day. In this article we are going to focus on creating a custom pull-to-refresh control. Pull-to-refresh was introduced several years ago by Loren Brichter in his popular Twitter client Tweetie. The idea is very simple: just pull down to refresh the content of a UITableView or UICollectionView, or even a UIScrollView. Lots of applications use pull-to-refresh with cool and nice animations, especially Snapchat.

So I decided to create my own custom pull-to-refresh (and sharing is caring, right?). I love doing complex things in a simple way so that everyone, even a beginner, can understand easily, and this one is really simple, believe me! Since iOS 6, Apple has made pull-to-refresh very easy with UIRefreshControl.

The following video shows our article’s demonstration.

Let’s get started with the fun part. I’m assuming that you are familiar with project setup.

Designing

Command + N.
custom-pull-image1

Feel free in naming the file and saving as per your choice.

After creating the xib, first change the Size attribute to Freeform in the attribute inspector. This allows you to resize the xib easily for designing.

custom-pull-image2

For article purpose, I’ve designed Xib as below:

custom-pull-image3

That’s it for designing. We have completed our 50% of work here now let’s jump to ViewController.swift file.

Coding

We are going to declare couple of variables.

var refreshView: RefreshView!

"RefreshView" is the class name I have given to the xib above. The variable is not initialized here; we will do that further down in the getRefereshView() method.

Below is the UIRefreshControl property, initialized with a closure that sets a clear background and tint color and adds its target method.

var tableViewRefreshControl: UIRefreshControl = {
        let refreshControl = UIRefreshControl()
        refreshControl.backgroundColor = .clear
        refreshControl.tintColor = .clear
        refreshControl.addTarget(self, action: #selector(refreshTableView), for: .valueChanged)
        return refreshControl
}()

Now we need the xib from the Bundle. The Bundle returns an array of nibs as Any, so we downcast the first object to RefreshView using an if let statement. In the method below, the refreshView frame is set according to tableViewRefreshControl, and finally refreshView is added as a subview of tableViewRefreshControl.

func getRefereshView() {
     if let objOfRefreshView = Bundle.main.loadNibNamed("RefreshView", owner: self, options: nil)?.first as? RefreshView {
        // Initializing the 'refreshView'
        refreshView = objOfRefreshView
        // Giving the frame as per 'tableViewRefreshControl'
        refreshView.frame = tableViewRefreshControl.frame
        // Adding the 'refreshView' to 'tableViewRefreshControl'
        tableViewRefreshControl.addSubview(refreshView)
     }
}

The target method refreshTableView will be called when the user performs pull-to-refresh. In this project, and particularly for the purpose of this article, I start the company-logo animation, let it run for 5 seconds, and then the tableView returns to its normal state.

@objc func refreshTableView() {
     refreshView.startAnimation()
     DispatchQueue.main.asyncAfter(deadline: .now() + 5.0) {
        self.refreshView.stopAnimation()
        self.tableViewRefreshControl.endRefreshing()
     }
}

At last, the prepareUI method needs to be called in viewDidLoad.

func prepareUI() {
    // Adding 'tableViewRefreshControl' to tableView
    tableView.refreshControl = tableViewRefreshControl
    // Getting the nib from bundle
    getRefereshView()
}

Don’t forget to add tableView and its delegate and dataSource. Now run the code. See the magic of your work.

Oh yes! If you are wondering about the animation highlighting the company logo below, please take a look at the code.

custom-pull-image5

Google I/O 2018 : Announcements that matter


Overview

Here are some interesting highlights that you need to know about the most awaited event Google I/O 2018

Smart Compose in Gmail to make it better

In the coming days Google will bring Smart Compose to Gmail. With Smart Compose, Google will suggest better wording as you type, like the name of the receiver or next-word suggestions.

Check this video for a demonstration of Smart Compose in Gmail

Suggested Action in Google Photos

In the future Google will update the Google Photos application. The main benefit of the updated Google Photos is that Google will identify the people in the pictures you have taken and suggest sharing with the identified person; with only one click your photo is shared with the person who is in the picture.

The next update to Google Photos is that if you have some old memories in black and white, then with just a simple click Google will colorize that photo.

Waymo’s self driving car: Look like a real Car

The best thing about Waymo's self-driving car is that it looks like a real car, not like an autonomous prototype; that bodes well for the future of driverless cars.

Waymo will be trying to put more than 20,000 cars on the road in the next few years. It is really great news announced in the Google keynote.

Google Duplex : An assistant to handle your calls

In the next few months, Google Duplex will help you handle your calls. The best thing about Google Duplex is that it talks like a real human, with smart intelligence.

In the video below you can see an example of how Duplex calls a hair parlour to book an appointment.

Google Maps is just awesome now

As we all know, Google Maps is one of Google’s best products, and it is about to get even better with AR and Google Lens integration.

Sometimes you can’t work out which way to go on Google Maps simply because of how your phone is rotated, right? Now the solution is here.

Let’s see how Google Lens and AR will work with Google Maps. After this update, you just point your camera at the street and Maps will overlay directions to your destination on the live view.

Google Maps is also adding a For You tab. Google will automatically add shops, businesses and buildings to the map, and the For You tab will suggest nearby shops, buildings and businesses you might need.

Android P beta will be rolling out

Google is now making Android P more accessible and will be launching the Android P beta for its Google Pixel phones.

Some of the other updates Google announced are listed below.

  • Smart intelligence features in Android P
  • A new navigation design in Android P
  • Changes to the volume slider and screen rotation behaviour

Extra features like app timers will help you stay away from your phone after a specified limit. You can set a time limit for a particular Android application; once the limit expires, the app icon colour changes for the rest of the day, nudging you to reduce your usage of that application.

Google Smart Display

According to the Google I/O announcements, Google will launch its Smart Display in July 2018.

Google Smart Display will help with everything from staying in touch with family through broadcast and video calling, to keeping an eye on your home via Google’s smart home partners.

New Google News

The new Google News is rolling out on Android, iOS and the web in 127 countries.

The new Google News focuses on three main things:

  • Keep up with news you care about
  • Understand the full story
  • Enjoy and support the news sources you love

Google Lens is available for more phones

Google Lens will soon be available on Android devices through the default camera app, so you won’t need a Google Pixel or any other high-end device.

Right now, Google Lens works on Android phones through Google Photos, but the company expects more: Google wants to bring this personal smart software to the forefront, as it showed at I/O.

MongoDB Query Optimization Techniques


Overview

MongoDB is an open source database that uses a document-oriented data model and a non-structured query language. MongoDB is built on an architecture of collections and documents.

When you’re programming an application, you generally want the database to respond instantly to anything you do.
Performance optimization becomes necessary when your data grows very large or when long-running queries start to hurt execution time.
I hope these simple tips will help you avoid the pain I went through!

1. Add Index on Collection:

If your application queries a collection on a particular field or set of fields, then an index on the queried field or a compound index on the set of fields can prevent the query from scanning the whole collection to find and return the query results.
You can set the sort order of an index field: 1 for ascending and -1 for descending.
Below are examples of how to add indexes on a collection with the appropriate order.
MongoDB supports many types of indexes that can be used on a collection.

Single Index:

Assume you have a collection that stores user information.
Indexes are created with the createIndex() function. The most basic command to index the email field in the users collection in ascending order is:

db.users.createIndex( { email: 1 } )

If your collection has an object field such as address that stores information like city, state and country, then you add the index like below.

db.users.createIndex( { "address.city": 1 }

For a single-field index, the sort order of the key (ascending or descending) does not matter, because MongoDB can traverse the index in either direction.
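
For illustration, a descending single-field index on a hypothetical createdAt field (the field name is only an assumption for this sketch) would be created like this:

db.users.createIndex( { createdAt: -1 } )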

Compound Index:

A compound index is always created on a minimum of two fields from the collection.
For example, the index below is created in ascending order on the fullName and userName fields.

db.users.createIndex({fullName:1, userName:1})

MongoDB limits a compound index to a maximum of 31 fields.
The order of the fields listed in a compound index is important.
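
As a rough illustration of why order matters, the compound index above can serve queries on its leading field, but not on the trailing field alone (the query values are just examples):

// Can use the { fullName: 1, userName: 1 } index (fullName is the index prefix)
db.users.find( { fullName: "John Doe" } )
db.users.find( { fullName: "John Doe", userName: "jdoe" } )

// Cannot use that index on its own, because userName is not an index prefix
db.users.find( { userName: "jdoe" } )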

Text Index:

If you need to search text within string or array-of-string fields, then add a text index.
Text indexes can include any field whose value is a string or an array of string elements.

db.users.createIndex( { comments: "text" } )
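
Once the text index is in place, you can query it with the $text operator; a minimal usage sketch (the search phrase is just an example):

db.users.find( { $text: { $search: "great service" } } )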

Unique Single Index:

A unique index ensures that the indexed fields do not store duplicate values.

db.users.createIndex( { "userId": 1 }, { unique: true } )

Unique Compound Index:

If you use the unique constraint on a compound index, then MongoDB will enforce uniqueness on the combination of the index key values.

db.users.createIndex( { mobile: 1, lastName: 1, firstName: 1 }, { unique: true } )

2. Aggregation Pipeline Optimization:

The aggregation pipeline consists of multiple stages, and each stage transforms the documents as they pass through the pipeline. Aggregation is typically used to produce results that combine data from multiple collections, where documents store references to one another.

I will share several tips for getting the best results from an aggregate query.

Projection Optimization:

Project only the required fields from the collection, reducing the amount of data passing through the pipeline.

For Example:

$project:
{
  // Inclusion projection: only fullName and email pass through the pipeline.
  // (Exclusions such as address: 0 cannot be mixed with inclusions, except _id: 0.)
  fullName: 1,
  email: 1
}

Pipeline Sequence Optimization:

Always maintain the stage sequence $match + $project, $match1 + $project1, and so on. This ordering reduces query execution time because documents are filtered before they reach the projection, as the sketch below shows.
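
A minimal sketch of that ordering, assuming a hypothetical status field on the users collection:

db.users.aggregate([
    { $match: { status: "active" } },           // filter first
    { $project: { fullName: 1, email: 1 } }     // then project only what is needed
])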

$match and $sort:

Define indexes on the fields used in $match and $sort so these stages can take advantage of them.
Whenever possible, place $match and $sort at the beginning of the aggregation pipeline, as in the example below.
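
A small sketch, assuming an index on a hypothetical age field:

// Index that the leading $match / $sort stages can use
db.users.createIndex( { age: 1 } )

db.users.aggregate([
    { $match: { age: { $gte: 18 } } },   // placed first so it can use the index
    { $sort: { age: 1 } },
    { $project: { fullName: 1, age: 1 } }
])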

$sort and $limit:

Use $sort before $limit in the aggregation pipeline, i.e. $sort + $limit + $skip, as sketched below.
The $sort operator can take advantage of an index when placed at the beginning of the pipeline or before the $project, $unwind and $group aggregation operators.

The $sort stage has a limit of 100 megabytes of RAM, so set the allowDiskUse option to true if a sort might exceed that limit.
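
A minimal sketch of $sort followed directly by $limit; when $limit immediately follows $sort, the optimizer can coalesce the two stages so only the top documents are kept in memory during the sort:

db.users.aggregate([
    { $sort: { fullName: 1 } },
    { $limit: 20 }   // coalesced into the $sort, so only 20 documents are held in memory
])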

$skip and $limit:

Always place $limit before $skip in the aggregation pipeline (when you do, remember the limit must account for the number of documents skipped); the optimizer applies the same reordering itself, as sketched below.
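
For reference, this is roughly what the optimizer does with a $sort + $skip + $limit sequence; the field name and values here are only illustrative:

db.users.aggregate([
    { $sort: { age: -1 } },
    { $skip: 10 },
    { $limit: 5 }
])
// The optimizer rewrites this as $sort + { $limit: 15 } + { $skip: 10 }:
// $limit moves before $skip and the skip amount is added to the limit.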

$lookup and $unwind:

Always create an index on the foreignField attribute used in a $lookup, unless the collections are of trivial size.
When a $unwind immediately follows a $lookup and operates on the $lookup’s as field, the optimizer coalesces the $unwind into the $lookup stage, which avoids creating large intermediate documents.

For example, the coalesced stage appears in the explain output like this:

{
        $lookup: {
            from: "otherCollection",
            as: "resultingArrays",
            localField: "x",
            foreignField: "y",
            unwinding: { preserveNullAndEmptyArrays: false }
    }
}
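
In the pipeline you actually write, this corresponds to a $lookup immediately followed by a $unwind on the same as field. A rough sketch, assuming a hypothetical source collection named orders and reusing the collection and field names from the example above:

db.orders.aggregate([
    {
        $lookup: {
            from: "otherCollection",
            localField: "x",
            foreignField: "y",
            as: "resultingArrays"
        }
    },
    // The optimizer coalesces this $unwind into the $lookup stage above
    { $unwind: "$resultingArrays" }
])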

AllowDiskUse in aggregate:

With allowDiskUse: true, aggregation operations can write data to the _tmp subdirectory in the database path directory. This allows large aggregations to spill to disk rather than fail when a stage exceeds its memory limit. For example:

db.orders.aggregate(
    [
            { $match: { status: "A" } },
            { $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
            { $sort: { total: -1 } }
    ],
    {
            allowDiskUse: true
    },
)

3. Rebuild the index on collection:

An index rebuild is required when you have added and removed indexes on a collection’s fields many times.

db.users.reIndex();

This operation drops all indexes for a collection, including the _id index, and then rebuilds all indexes.

4. Remove Unnecessary Indexes:

Keep only the required indexes on the collection, because every index consumes CPU during write operations.
If a compound index already covers a field as its prefix, remove the separate single-field index on that field, as sketched below.
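
A small sketch of reviewing and dropping a redundant single-field index, assuming the compound index from earlier already exists (check your own getIndexes() output first):

// List the existing indexes on the collection
db.users.getIndexes()

// If { fullName: 1, userName: 1 } already exists, a separate { fullName: 1 }
// index is redundant and can be dropped
db.users.dropIndex( { fullName: 1 } )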

5. Use limit in the result records:

If you know how many result records you need, always use limit() to reduce the demand on network resources.

For example, if you need only 10 users from the users collection, use a query like the one below.

db.users.find().limit(10)

6. Use Projection to return only required Data:

When the response requires only a subset of fields from the documents, you can achieve better performance by returning only the fields you need.

If you have a users collection and only need fields like fullName, email and mobile, you would issue the following query.

db.users.find( {}, { fullName : 1 , email : 1 , mobile : 1} ).sort( { timestamp : -1 } )

Analyze Query Performance:

I hope you have applied all of the techniques above; now you need to check the performance of your queries using MongoDB commands.
To analyze query performance, we can check the query execution time, the number of records scanned and much more.

The explain() method returns a document with the query plan and, optionally, the execution statistics.

The explain() method supports three different verbosity options for returning execution information.

The possible options are "queryPlanner", "executionStats" and "allPlansExecution"; "queryPlanner" is the default.
You can see the difference by applying each of these options to the explain() method.

For Example-

db.users.find({sEmail: 'demo@test.com'}).explain()

db.users.find({sMobile:'9685741425'}).explain("executionStats")

The main points to pay attention to in the output above:

  • queryPlanner.winningPlan.stage: displays COLLSCAN to indicate a collection scan. This is a generally expensive operation and can result in slow queries.
  • executionStats.nReturned: displays 3 to indicate that the query matches and returns three documents.
  • executionStats.totalKeysExamined: displays 0 to indicate that this query is not using an index.
  • executionStats.totalDocsExamined: displays 10 to indicate that MongoDB had to scan ten documents (i.e. all documents in the collection) to find the three matching documents.
  • queryPlanner.winningPlan.inputStage.stage: displays IXSCAN to indicate index use.

The explain() method can be used in several ways, as shown below.

db.orders.aggregate(
    [
        { $match: { status: "A" } },
        { $group: { _id: "$custId", total: { $sum: "$amount" } } },
        { $sort: { total: -1 } }
    ],
    {explain: true}
);

db.orders.explain("executionStats").aggregate(
    [
        {$match: {status: "A", amount: {$gt: 300}}}
    ]
);

db.orders.explain("allPlansExecution").aggregate(
    [
        {$match: {status: "A", amount: {$gt: 300}}}
    ]
);

Conclusion:

Finally, now that I have covered these very useful query optimization techniques, take the information provided and see how you can make your queries dramatically faster and more efficient.

Please let me know if you have further performance tips.
