Channel: Yudiz Solutions Ltd.

Building AR Portal: A door to a fascinating AR experience


Overview

Ever wanted to visit an imaginary, fairy world? 😀
It’s now possible. Thanks to Augmented Reality. You just need to pick up your mobile, point it at an open space, plot an AR portal and dive into the magical world of your choice.

And what if I say, all this is possible with just 10 lines of code? Wouldn’t it be just a cherry on top?

Coding for AR is a little tricky without the support of an abstract SDK. Here, we will use ViroCore, a scene kit for Android ARCore. It provides a high level of abstraction, and coding with ViroCore is more fun and easier than ever, thanks to those wonderful people at ViroMedia (https://viromedia.com). And the most amazing part of ViroCore is: it's free!!

Isn’t it jaw-dropping news? 😀

ar-portal-image1

Practical

Let's dive into some practical scenarios. But before that, you'll need an API key, which you can grab by registering at https://viromedia.com/signup. It then has to be added to the manifest file as shown below:

ar-portal-image2
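For reference, the manifest entry is just a meta-data tag inside the application element, roughly like this (the meta-data name shown here is an assumption; use the exact name from the Viro documentation, and replace the placeholder with your own key):

<application>
   <!-- ViroCore API key; YOUR_API_KEY_HERE is a placeholder -->
   <meta-data
       android:name="com.viromedia.API_KEY"
       android:value="YOUR_API_KEY_HERE" />
</application>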

We have to add two libraries: ARCore and the ViroCore SDK. Since it is based on ARCore, it will only run on ARCore-supported devices.

We'll use the assets provided with ViroCore's sample code. In this demo, we will have a ship's window, and through that window we can pass into a beach world. For that, we'll need a 360-degree image of the beach world plus a model of the ship window and its textures. Place them in the assets folder.

ar-portal-image3

The .vrx format is ViroCore's own model extension; here, it is essentially a converted .fbx model.

Final output will be something similar to the below image:

ar-portal-image4

In the Activity's onCreate() method, we need to set ViroView as the content view. ViroView is basically a FrameLayout that initializes the camera for AR.

@Override
protected void onCreate(Bundle savedInstanceState) {
   super.onCreate(savedInstanceState);

   mViroView = new ViroViewARCore(this, new ViroViewARCore.StartupListener() {
       @Override
       public void onSuccess() {
           displayScene();
       }

       @Override
       public void onFailure(ViroViewARCore.StartupError error, String errorMessage) {
           Log.e(TAG, "Error initializing AR [" + errorMessage + "]");
       }
   });

   setContentView(mViroView);
}

As seen above, displayScene() is called from onSuccess() once ViroView finishes initializing.

private void displayScene() {
   mScene = new ARScene();
   mScene.setListener(new ARSceneListener(this, mViroView));

   //add light
   OmniLight light = new OmniLight();
   light.setColor(Color.WHITE);
   light.setPosition(new Vector(0, 1, -4));
   mScene.getRootNode().addLight(light);

   //load ship window model
   Object3D shipDoorModel = new Object3D();
   shipDoorModel.loadModel(Uri.parse("file:///android_asset/portal_ship.vrx"), Object3D.Type.FBX, null);

   //create a portal
   Portal portal = new Portal();
   portal.addChildNode(shipDoorModel);
   portal.setScale(new Vector(0.5, 0.5, 0.5));

   //add a beach world
   PortalScene portalScene = new PortalScene();
   portalScene.setPosition(new Vector(0, 0, -5));
   portalScene.setPassable(true);
   portalScene.setPortalEntrance(portal);
   Bitmap beachBackground = getBitmapFromAssets("beach.jpg");          //load from assets
   Texture beachTexture = new Texture(beachBackground, Texture.Format.RGBA8, true, false);
   portalScene.setBackgroundTexture(beachTexture);

   mScene.getRootNode().addChildNode(portalScene);

   //set scene in ViroView
   mViroView.setScene(mScene);

   View.inflate(this, R.layout.viro_initializing_ar, ((ViewGroup) mViroView));
}

private Bitmap getBitmapFromAssets(String filePath) {
   AssetManager assetManager = getAssets();

   InputStream istr;
   Bitmap bitmap = null;
   try {
       istr = assetManager.open(filePath);
       bitmap = BitmapFactory.decodeStream(istr);
   } catch (IOException e) {
        Log.e(TAG, "Failed to load bitmap from assets", e);
   }

   return bitmap;
}

This adds an ARScene to the ViroView.
First, we have set the light position and color.

OmniLight light = new OmniLight();
light.setColor(Color.WHITE);
light.setPosition(new Vector(0, 1, -4));
mScene.getRootNode().addLight(light);

Then, we have created and loaded the ship window model.

Object3D shipDoorModel = new Object3D();
shipDoorModel.loadModel(Uri.parse("file:///android_asset/portal_ship.vrx"), Object3D.Type.FBX, null);

Now, comes the main part. We have created a portal and have added a background to it.

Portal portal = new Portal();
portal.addChildNode(shipDoorModel);
portal.setScale(new Vector(0.5, 0.5, 0.5));

PortalScene portalScene = new PortalScene();
portalScene.setPosition(new Vector(0, 0, -5));
portalScene.setPassable(true);
portalScene.setPortalEntrance(portal);
Bitmap beachBackground = getBitmapFromAssets("beach.jpg");          //load from assets
Texture beachTexture = new Texture(beachBackground, Texture.Format.RGBA8, true, false);
portalScene.setBackgroundTexture(beachTexture);

setPassable(true) allows us to create a portal that we can walk into. We also have to set the ship model as the entrance of the PortalScene.

That’s it. Now, it’s beach time !! 😀

Video

Future

We, here at Yudiz, are targeting to achieve nested portal scenarios (Portal inside another Portal).

Conclusion:

We are just a 'portal' away from an AR-dominated world. Adopting such amazing technologies as soon as possible will serve us and our businesses with great benefits.


Room Database: An Architecture Component


What are Architecture Components?

In response to popular demand, the Android team has written an opinionated guide to architecting Android applications and developed a set of Architecture Components.

Architecture Components are a growing set of libraries, with more of them on the way, and they are meant to help you build Android applications in a better way.

The whole point of these libraries is to simplify things that have traditionally been a challenge in Android application development.

List of Architecture Components:

Room:

Room is a robust SQLite object-mapping library.

Lifecycle Components:

Lifecycle components like LiveData, lifecycle observers, ViewModel and lifecycle owners help you handle your app's lifecycle.

Paging Library:

The Paging Library is a small library that helps you load data from the server gradually.

Here, We’ll focus on Room Database.

What is Room Database?

  • Room is a persistence library that provides an abstraction layer over SQLite.
  • By using Room, your SQLite code becomes much simpler thanks to annotations, so you no longer need to write a lot of code to save or store your data locally.
  • The Room library has reached a stable 1.0 release, so you can use it in an Android application without any fear.

Why Room Database?

Working directly with the SQLite database in Android has several drawbacks:

  • You have to write a lot of boilerplate code.
  • You have to implement object mapping for every single query you write.
  • Database migrations are difficult in plain SQLite.
  • Accessing the database is cumbersome.
  • If you are not careful, you can easily run a long operation or task on the main thread.

Advantages of Room Database

  • We only have to write a little boilerplate code.
  • Entities and queries are checked at compile time, so broken tables or queries are caught before they can crash the application.
  • Room reports not only query errors but also missing-table errors.
  • You can fully integrate Room with other Architecture Components such as LiveData.

Let’s get started with Room Database

Update the dependencies

The Room libraries are available via Google's Maven repository, so you need to add it to the list of repositories in the project-level build.gradle file.

allprojects {
   repositories {
       google()
       jcenter()
   }
}

In your app/build.gradle file, you need to add dependencies for room database.

implementation "android.arch.persistence.room:runtime:1.0.0"

annotationProcessor "android.arch.persistence.room:compiler:1.0.0"

Create Table with Room

@Entity(tableName = "tbl_employee")
public class Employee {
   @PrimaryKey(autoGenerate = true)
   private int id;

   @ColumnInfo(name = "name")
   private String name;

   @ColumnInfo(name = "mobile")
   private String mobile;
   // Getters and setters (which Room needs to access these private fields) are omitted for brevity.
}

@Entity defines a table; set the table name in 'tableName'
(you can skip it if your class name is the same as your table name).

@PrimaryKey marks your column as the primary key. Set 'autoGenerate' to true to auto-generate values for that column.

@ColumnInfo marks the field as a table column. Set the name of your table field in 'name' (you can skip it if the variable name and the column name are the same).

Create a DAO (Data Access Object) to handle all queries

Read query

@Dao
public interface EmployeeDao {
   @Query("SELECT * FROM tbl_employee")
   List<Employee> getAllEmployees();
}

@Dao Contains all the methods which are used to access the database

@Query Here you can write all the Custom SQL Queries

Insert Query

@Insert
void insertEmployee(Employee employee);

@Insert annotation will automatically insert data in the table

Insert Query for list of data

@Insert
void insertEmployees(List<Employee> employees);

@Insert also handles more than one employee: pass a single record to insert one row, or pass a list of records to insert all of them.

Update Query

@Update
void updateEmployee(Employee employee);

@Update annotation is used to update data stored in the table

Delete Query

@Delete 
void deleteEmployee(Employee employee);

@Delete annotation will delete the employee data

Generate Database class to handle tables and queries

@Database(entities = {Employee.class}, version = 1)
public abstract class EmployeeDB extends RoomDatabase {
   public abstract EmployeeDao employeeDao();
}

@Database annotation creates your database. In entities, you have to specify all the entity (table) classes you have created.

Build database in your activity

database = 
Room.databaseBuilder(getApplicationContext(), 
EmployeeDB.class, 
DATABASE_NAME).allowMainThreadQueries().build();

Here you have to specify the database name and build the database; after that, you can run all database queries through this object.
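allowMainThreadQueries() is used here only to keep the demo short. In a real app, the queries belong on a background thread; a minimal sketch (assuming this runs inside an Activity) could look like this:

// Requires java.util.concurrent.Executors and java.util.List.
// Run the query off the main thread and post the result back to the UI.
Executors.newSingleThreadExecutor().execute(new Runnable() {
   @Override
   public void run() {
       final List<Employee> employees = database.employeeDao().getAllEmployees();
       runOnUiThread(new Runnable() {
           @Override
           public void run() {
               // update the UI with the loaded employees
           }
       });
   }
});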

Retrieve all employee data

List<Employee> employeeList =
database.employeeDao().getAllEmployees();

By this, you can get all the employee detail as a list.

Insert Employee Data

database.employeeDao().insertEmployee(employee);

Here you have to specify employee details to insert into a table.

Update Employee Data

database.employeeDao().updateEmployee(employee);

You can update the employee details based on unique ID.

Delete Employee Data

database.employeeDao().deleteEmployee(employee);

Delete employee data from table based on unique ID.
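Putting these operations together, a short end-to-end sketch could look like the following (it assumes the Employee entity exposes the getters and setters that Room needs for its private fields):

// Create and insert a new employee (setName/setMobile assumed to exist on Employee).
Employee employee = new Employee();
employee.setName("John");
employee.setMobile("9999999999");
database.employeeDao().insertEmployee(employee);

// Read everything back.
List<Employee> employees = database.employeeDao().getAllEmployees();

// Update the first record, then delete it.
Employee first = employees.get(0);
first.setMobile("8888888888");
database.employeeDao().updateEmployee(first);
database.employeeDao().deleteEmployee(first);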

room-database-image1

room-database-image2

Simple Animations Using Facebook’s Pop Framework


Overview

Facebook's open-source animation framework, Pop, was first released in late April 2014; it powers the animations in the Paper for iOS app. The Pop framework is easy to pick up if you have worked with Apple's Core Animation before, and even if you haven't, writing a basic animation using Pop is very easy.

Once you spend time on iOS animation, you will run into plenty of tricky cases and far too many lines of code for even a small task; Facebook's Pop animation framework helps you overcome that.

Pop is an extensible animation engine for both iOS and OS X. It supports basic, spring, decay and custom animations.

There are a couple of things to adore about this library:

  1. It’s easy to use and fun
  2. It plays well with auto layout by allowing you to interact at the layer level
  3. You can animate constraints when needed
  4. It supports spring and dynamic animations

I'm assuming you know how to create a new Xcode project, so I'm skipping that part.

Installing CocoaPods

Install CocoaPods on your system (see the CocoaPods Guides).

OR

If you have already set up CocoaPods for your project, then you'll just need to add the following line to your Podfile and run pod install:

pod 'pop'

Basic Animation

linear animation

ease-in

ease-out

easein-out

Basic animations make linear, ease-in, ease-out, and ease-in-ease-out animations simple to leverage.

In the given example, I'm simply rotating the view. The POPBasicAnimation class is used for basic animation. POPBasicAnimation has several different properties, but I have used toValue, fromValue and duration to define the behavior of the view.

Steps:

  1. Import framework in your UIViewController class: import pop
  2. Create the outlet variables for UIView.
    @IBOutlet weak var animView: UIView!

  3. Create a POPBasicAnimation object using a layer property.
    let anim = POPBasicAnimation(propertyNamed: kPOPLayerRotation)
  4. Set the toValue, fromValue, duration of the animation.
    anim?.fromValue = 0
    anim?.toValue = Double.pi
    anim?.duration = 0.5
  5. Add animation to View or Layer.
    animView.layer.pop_add(anim, forKey: "basicAnimation")

    NOTE: You can set any string as key
  6. Implement button action method.
    func prepareForBasicAnimation() {
        let anim = POPBasicAnimation(propertyNamed: kPOPLayerRotation)
        anim?.fromValue = 0
        anim?.toValue = Double.pi
        anim?.duration = 0.5
        animView.layer.pop_add(anim, forKey: "basicAnimation")
    }

    @IBAction func btnAnimationTap(_ sender: UIButton) {
        prepareForBasicAnimation()
    }

basic-animation

Spring Animation

spring

Here in the demo app, I have given a bouncing effect to the rounded view as it moves up and down, and there is a UISlider to set the springBounciness of the UIView. The POPSpringAnimation class is used for spring animation. POPSpringAnimation has properties like springBounciness and springSpeed to define the behavior of the spring. We apply a constraint to the object we wish to move and modify the value of kPOPLayoutConstraintConstant to animate our interface object.

Steps:

  1. Import framework in your UIViewController class: import pop
  2. Create the outlet for ballView, slider and vertical center constraint constant of UIView. Your code should look like this:
    @IBOutlet weak var ballView: UIView!
    @IBOutlet weak var slider: UISlider!
    @IBOutlet weak var ballCenterYConstraint: NSLayoutConstraint!

    spring-animation-step2
  3. Add a Bool to track the up/down state
    var atTop: Bool = false
  4. In the viewDidLoad method, add the following line of code to set UISlider minimum and maximum value.
    slider.minimumValue = 8.0
    slider.maximumValue = 20.0
  5. Create POPSpringAnimation object by using NSLayoutConstraint Property.
    let spring = POPSpringAnimation(propertyNamed: kPOPLayoutConstraintConstant)
  6. Set the toValue, springBounciness, springSpeed of animation.
    spring?.toValue = 100
    spring?.springBounciness = 8.0
    spring?.springSpeed = 8
  7. Add animation to UIView vertical center constraint
    ballCenterYConstraint.pop_add(spring, forKey: "moveUp")
  8. Same code for MoveDown, just change value to -100.
    func animateBottom() {
        let spring = POPSpringAnimation(propertyNamed: kPOPLayoutConstraintConstant)
        spring?.toValue = -100
        spring?.springBounciness = bounciness
        spring?.springSpeed = 8
        ballCenterYConstraint.pop_add(spring, forKey: "moveDown")
    }
  9. Implement button action method.
    func animateTop() {
            let spring = POPSpringAnimation(propertyNamed: kPOPLayoutConstraintConstant)
            spring?.toValue = 100
            spring?.springBounciness = bounciness
            spring?.springSpeed = 8
            ballCenterYConstraint.pop_add(spring, forKey: "moveUp")
        }

    func animateBottom() {
            let spring = POPSpringAnimation(propertyNamed: kPOPLayoutConstraintConstant)
            spring?.toValue = -100
            spring?.springBounciness = bounciness
            spring?.springSpeed = 8
            ballCenterYConstraint.pop_add(spring, forKey: "moveDown")
        }

    @IBAction func btnAnimationTap(_ sender: UIButton) {
            if atTop {
                animateBottom()
            } else {
                animateTop()
            }
            atTop = !atTop
        }

spring-animation

Decay Animation

decay

Decay makes a movement come to an eventual slow stop; it uses velocity as its input. In the demo app, I have given a slow-end effect to the rounded view. The POPDecayAnimation class is used for decay animation, and it has a property called velocity.

Steps:

  1. Import framework in your UIViewController class: import pop.
  2. Create the outlet variables for the UIView as well as the vertical center constraint of the UIView. Your outlet variables should look like this:
    @IBOutlet weak var ballView: UIView!
    @IBOutlet weak var ballCenterYConstraint: NSLayoutConstraint!

    decay-animation-step2
  3. Create a POPDecayAnimation object using the NSLayoutConstraint property.
    let spring = POPDecayAnimation(propertyNamed: kPOPLayoutConstraintConstant)
  4. Set the velocity of animation.
    spring?.velocity = NSValue(cgPoint: CGPoint(x: -642.0, y: 0))
  5. Add animation to UIView vertical center constraint.
    ballCenterYConstraint.pop_add(spring, forKey: "move")
  6. Implement button action method.
    func prepareForDecayAnimation() {
            let spring = POPDecayAnimation(propertyNamed: kPOPLayoutConstraintConstant)
            spring?.velocity = NSValue(cgPoint: CGPoint(x: -642.0, y: 0))
            ballCenterYConstraint.pop_add(spring, forKey: "move")
    }

    @IBAction func btnAnimationTap(_ sender: UIButton) {
            ballCenterYConstraint.constant = 160 // reset the view's Y position
            prepareForDecayAnimation()
    }

decay-animation

Delegation Handling

POP comes with a couple of delegate methods that alert you to particular events. We can take advantage of these when stacking animations in order to get exactly the feel we are looking for.

When one of these methods is called, you can check the animation's name to verify it is the one you need. For example:

let sprintAnimation = POPSpringAnimation(propertyNamed: kPOPViewScaleXY)
sprintAnimation?.velocity = NSValue(cgPoint: CGPoint(x: 8.0, y: 8.0))
sprintAnimation?.springBounciness = 20.0
sprintAnimation?.name = "send"
sprintAnimation?.delegate = self
btnSend.pop_add(sprintAnimation, forKey: "sendAnimation")

Some POPAnimationDelegate methods are:

// Called on animation start.
    func pop_animationDidStart(_ anim: POPAnimation!) {
        if anim.name == "send" {
            // perform a new animation or action
        }
    }

    // Called when the value meets or exceeds the toValue.
    func pop_animationDidReach(toValue anim: POPAnimation!) {
        if anim.name == "send" {
            // perform a new animation or action
        }
    }

    // Called on animation stop.
    func pop_animationDidStop(_ anim: POPAnimation!, finished: Bool) {
        if anim.name == "send" {
            // perform a new animation or action
        }
    }

    // Called each frame animation is applied
    func pop_animationDidApply(_ anim: POPAnimation!) {
        if anim.name == "send" {
            // perform a new animation or action
        }
    }

Example: Highlighted UITableViewCell

Go to your UITableViewCell class and override the setHighlighted method using the given code snippet:

override func setHighlighted(_ highlighted: Bool, animated: Bool) {
        super.setHighlighted(highlighted, animated: animated)
        if highlighted {
            let scaleAnimation = POPBasicAnimation(propertyNamed: kPOPViewScaleXY)
            scaleAnimation?.duration = 0.1// defaults to 0.4
            scaleAnimation?.toValue = NSValue(cgPoint: CGPoint(x: 1.0, y: 1.0))
            self.lblTitle.pop_add(scaleAnimation, forKey: "scaleAnimation")
        } else {
            let springAnimation = POPSpringAnimation(propertyNamed: kPOPViewScaleXY)
            springAnimation?.toValue = NSValue(cgPoint: CGPoint(x: 0.9, y: 0.9))
            springAnimation?.velocity = NSValue(cgPoint: CGPoint(x: 2.0, y: 2.0))
            springAnimation?.springBounciness = 20.0//from 1 to 20
            self.lblTitle.pop_add(springAnimation, forKey: "springAnimation")
        }
    }

highlighted-cell

Properties To Know

  • toValue: id // value type should match the property
  • fromValue: id // value type should match the property
  • velocity: id
  • springBounciness: CGFloat // from 1 to 20
  • springSpeed: CGFloat // from 1 to 20
  • repeatForever: Bool // a convenient way to loop an animation
  • duration: CFTimeInterval // defaults to 0.4
  • beginTime: CFTimeInterval // if you want to delay the beginning of an animation
  • name: NSString // identify the animation when delegate methods are called
  • autoreverses: Bool // this will complete one full animation cycle; use repeatForever to loop the effect

Note: There are a number of predefined animation properties in the POPAnimatableProperty.h file that will help you make your animation.
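To see a few of these properties working together, here is a small sketch that loops a fade in and out using autoreverses and repeatForever (pulseView is an assumed outlet, not part of the demo above):

func startPulsing() {
    let fade = POPBasicAnimation(propertyNamed: kPOPViewAlpha)
    fade?.fromValue = 1.0
    fade?.toValue = 0.3
    fade?.duration = 0.6          // defaults to 0.4
    fade?.autoreverses = true     // play the cycle back in reverse
    fade?.repeatForever = true    // loop until the animation is removed
    pulseView.pop_add(fade, forKey: "pulseAnimation")
}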

Pagination Data Scraping in Android using Jsoup (Java HTML Parser)


Overview

Jsoup lets us select and iterate over all the elements of an HTML document.
Jsoup provides the select method, which accepts CSS-style selectors to choose elements.
Click here to get started with the basics of Data Scraping in Android using Jsoup.
Now we will scrape the data from all the paginated pages of the Yudiz blog and display it in a RecyclerView.

Steps :-

  1. First of all, we need to find the total number of pages available on the blog page.
  2. We store all the page URLs in an ArrayList.
  3. Then we connect to each URL and extract the needed data from it.

Step 1 : HTML Source Code

We will use http://www.yudiz.com/blog/ as the webpage to scrape.

pagination-image1

Total Number of pages HTML Code:-

<div class="pages">
<a href="http://www.yudiz.com/blog/" class="page active">1</a>
<a href="http://www.yudiz.com/blog/page/2/" class="page">2</a>
<a href="http://www.yudiz.com/blog/page/3/" class="page">3</a>
<a href="http://www.yudiz.com/blog/page/4/" class="page">4</a>
<a href="http://www.yudiz.com/blog/page/5/" class="page">5</a>
<a href="http://www.yudiz.com/blog/page/6/" class="page">6</a>
<a href="http://www.yudiz.com/blog/page/7/" class="page">7</a>
<a href="http://www.yudiz.com/blog/page/8/" class="page">8</a>
</div>

Author Name HTML Code:-

<span class="vcard author post-author test">
<a href="http://www.yudiz.com/author/sandeep-joshi/">
Sandeep Joshi
</a>
</span>

Blog Upload Date HTML Code:-

<span class="post-date updated">November 24, 2017</span>

Blog Title HTML Code:-

<div class="post-title">
<h2 class="entry-title" itemprop="headline">
<a href="http://www.yudiz.com/how-to-customize-your-app-icon/">
How to customize your app icon?
</a>
</h2>
</div>

Note:- For scraping, you must find a unique HTML element or class for each required field; if the same HTML element is also used for other purposes, you will have to find another distinguishing element.

pagination-image2

pagination-image3

Step 2 : Android Source Code

Permission needed in AndroidManifest.xml:-

<uses-permission android:name="android.permission.INTERNET" />

Gradle dependency to be added:-

dependencies {
   implementation 'org.jsoup:jsoup:1.11.2'
}

activity_main.xml

<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:app="http://schemas.android.com/apk/res-auto"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   tools:context="com.jsoupdemo.MainActivity">

   <android.support.v7.widget.RecyclerView
       android:id="@+id/act_recyclerview"
       android:layout_width="match_parent"
       android:layout_height="match_parent">

   </android.support.v7.widget.RecyclerView>

</android.support.constraint.ConstraintLayout>

row_data.xml

<?xml version="1.0" encoding="utf-8"?>
<android.support.v7.widget.CardView xmlns:android="http://schemas.android.com/apk/res/android"
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:layout_margin="5dp">

   <LinearLayout
       android:layout_width="match_parent"
       android:layout_height="wrap_content"
       android:orientation="vertical">

       <TextView
           android:id="@+id/row_tv_blog_title"
           android:layout_width="match_parent"
           android:layout_height="wrap_content"
           android:layout_margin="5dp"
           android:textStyle="bold" />

       <TextView
           android:id="@+id/row_tv_blog_author"
           android:layout_width="match_parent"
           android:layout_height="wrap_content"
           android:layout_margin="5dp" />

       <TextView
           android:id="@+id/row_tv_blog_upload_date"
           android:layout_width="match_parent"
           android:layout_height="wrap_content"
           android:layout_margin="5dp" />
   </LinearLayout>
</android.support.v7.widget.CardView>

MainActivity.java

public class MainActivity extends AppCompatActivity {

   private ProgressDialog mProgressDialog;
   private String url = "http://www.yudiz.com/blog/";
   private ArrayList<String> mAuthorNameList = new ArrayList<>();
   private ArrayList<String> mBlogUploadDateList = new ArrayList<>();
   private ArrayList<String> mPaginationList = new ArrayList<>();
   private ArrayList<String> mBlogTitleList = new ArrayList<>();

   @Override
   protected void onCreate(Bundle savedInstanceState) {
       super.onCreate(savedInstanceState);
       setContentView(R.layout.activity_main);

       new Description().execute();

   }

   private class Description extends AsyncTask<Void, Void, Void> {
       @Override
       protected void onPreExecute() {
           super.onPreExecute();
           mProgressDialog = new ProgressDialog(MainActivity.this);
           mProgressDialog.setTitle("Android Basic JSoup Tutorial");
           mProgressDialog.setMessage("Loading...");
           mProgressDialog.setIndeterminate(false);
           mProgressDialog.show();
       }

       @Override
       protected Void doInBackground(Void... params) {
           try {
               // Connect to the web site
               Document mBlogDocument = Jsoup.connect(url).get();

               int mPaginationSize = mBlogDocument.select("div[class=pages]").select("a").size();

               for (int page = 0; page < mPaginationSize; page++) {

                   Elements mPageLinkTaga = mBlogDocument.select("div.pages a").eq(page);
                   String mPageLink = mPageLinkTaga.attr("href");

                   mPaginationList.add(mPageLink);
                   Log.i("TAG1", mPageLink);
               }

               for (int j = 0; j < mPaginationList.size(); j++) {
                   Document mBlogPagination = Jsoup.connect(mPaginationList.get(j)).get();

                   // Using Elements to get the Meta data
                   Elements mElementDataSize = mBlogPagination.select("div[class=author-date]");
                   // Locate the content attribute
                   int mElementSize = mElementDataSize.size();

                   for (int i = 0; i < mElementSize; i++) {
                       Elements mElementAuthorName = mBlogPagination.select("span[class=vcard author post-author test]").select("a").eq(i);
                       String mAuthorName = mElementAuthorName.text().trim().replace("\n", "").replace("\t", "").replace("\r", "").replace("\b", "");

                       Elements mElementBlogUploadDate = mBlogPagination.select("span[class=post-date updated]").eq(i);
                       String mBlogUploadDate = mElementBlogUploadDate.text();

                       Elements mElementBlogTitle = mBlogPagination.select("h2[class=entry-title]").select("a").eq(i);
                       String mBlogTitle = mElementBlogTitle.text();

                       mAuthorNameList.add(mAuthorName);
                       mBlogUploadDateList.add(mBlogUploadDate);
                       mBlogTitleList.add(mBlogTitle);
                   }
               }
           } catch (IOException e) {
               e.printStackTrace();
           }
           return null;
       }

       @Override
       protected void onPostExecute(Void result) {
            // Populate the RecyclerView with the scraped data

           RecyclerView mRecyclerView = (RecyclerView) findViewById(R.id.act_recyclerview);

           DataAdapter mDataAdapter = new DataAdapter(MainActivity.this, mBlogTitleList, mAuthorNameList, mBlogUploadDateList);
           RecyclerView.LayoutManager mLayoutManager = new LinearLayoutManager(getApplicationContext());
           mRecyclerView.setLayoutManager(mLayoutManager);
           mRecyclerView.setAdapter(mDataAdapter);

           mProgressDialog.dismiss();
       }
   }
}

DataAdapter.java

public class DataAdapter extends RecyclerView.Adapter<DataAdapter.MyViewHolder> {

   private ArrayList<String> mBlogTitleList = new ArrayList<>();
   private ArrayList<String> mAuthorNameList = new ArrayList<>();
   private ArrayList<String> mBlogUploadDateList = new ArrayList<>();
   private Activity mActivity;
   private int lastPosition = -1;

   public DataAdapter(MainActivity activity, ArrayList<String> mBlogTitleList, ArrayList<String> mAuthorNameList, ArrayList<String> mBlogUploadDateList) {
       this.mActivity = activity;
       this.mBlogTitleList = mBlogTitleList;
       this.mAuthorNameList = mAuthorNameList;
       this.mBlogUploadDateList = mBlogUploadDateList;
   }

   public class MyViewHolder extends RecyclerView.ViewHolder {

       private TextView tv_blog_title, tv_blog_author, tv_blog_upload_date;

       public MyViewHolder(View view) {
           super(view);
           tv_blog_title = (TextView) view.findViewById(R.id.row_tv_blog_title);
           tv_blog_author = (TextView) view.findViewById(R.id.row_tv_blog_author);
           tv_blog_upload_date = (TextView) view.findViewById(R.id.row_tv_blog_upload_date);
       }
   }

   @Override
   public MyViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
       View itemView = LayoutInflater.from(parent.getContext())
               .inflate(R.layout.row_data, parent, false);

       return new MyViewHolder(itemView);
   }

   @Override
   public void onBindViewHolder(MyViewHolder holder, final int position) {
       holder.tv_blog_title.setText(mBlogTitleList.get(position));
       holder.tv_blog_author.setText(mAuthorNameList.get(position));
       holder.tv_blog_upload_date.setText(mBlogUploadDateList.get(position));
   }

   @Override
   public int getItemCount() {
       return mBlogTitleList.size();
   }
}

Step 3 : Test

pagination-image4

Laravel Custom Commands


Overview

Uses :

Custom commands are used when we need to check or track something at a particular time. For example: sending a notification mail to a user when their subscription period is about to expire, or sending a feedback form to a new user after one month to learn about their experience.

In such cases we create a custom command and set up a cron job that executes it.

Now the question is how to make custom command?

Laravel Initial SetUp

Let’s start with a new fresh project.
*For the sake of understanding, we keep it simple by using only two tables: "users" and "posts".

  • Create new project
    laravel new custom
  • Set up SQL connection in .env file
  • Make new migration table : posts
    php artisan make:migration create_post_table --create=posts
  • Add few columns like title, description, user_id(foreign constraint)
  • Migrate database
    php artisan migrate
  • Make Eloquent Model For User(if not exists) and Posts

Set Up Relationship (*optional)

1) Define the relationship and add an accessor in the Post Eloquent model. (*optional)

public function user()
{
    return $this->belongsTo('App\User');
}

public function getCreatedAtAttribute($value)
{
    return \Carbon\Carbon::parse($value)->diffForHumans();
}

Similarly, add a relationship and an accessor in the User Eloquent model.

public function posts()
{
    return $this->hasMany('App\Post');
}

public function getNameAttribute($value)
{
    return title_case($value);
}

2) Add few records in a database. (manually from phpMyAdmin)
3) That's it; this basic app is just there to help us understand custom commands.

Simple Commands

Generate Command File

php artisan make:command {command_name}
php artisan make:command allUsers

Structure Of Command

  • After generating your command, you should fill in the signature and description properties of the class, which will be used when displaying your command on the list screen.
  • The handle method will be called when your command is executed.
  • You may place your command logic in this method.

This creates a new command file: app/Console/Commands/allUsers.php

Now Modify our new command file:

protected $signature = 'command:name';

Signature means the command you type after php artisan.

So change signature variable to :

protected $signature = 'user:all';

Change the description :

protected $description = 'Get All Users.';

This message is displayed in the help section of our command.

Now comes the actual part of the command:

We need to add our logic in the handle() method:

public function handle()
{
    // Add Your Logic / Task 
}

In this example we need to select user details, so add :

$users = User::select('id','name','email','contact')
    ->orderBy('name')
    ->get()
    ->toArray();

$this->table(['id','name','email','contact'],$users);

NOTE :

  • If you want to display details in tabular format, toArray() is required.
  • In table(), we pass the column names and the array of rows for those columns.
  • The Artisan CLI renders the output as a table, which keeps it readable.
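Putting the pieces together, the whole command class ends up looking roughly like this (only the parts we changed are shown):

<?php

namespace App\Console\Commands;

use App\User;
use Illuminate\Console\Command;

class allUsers extends Command
{
    // The command typed after "php artisan"
    protected $signature = 'user:all';

    // Shown in the help / command list
    protected $description = 'Get All Users.';

    public function handle()
    {
        $users = User::select('id','name','email','contact')
            ->orderBy('name')
            ->get()
            ->toArray();

        // Render the result as a table in the terminal
        $this->table(['id','name','email','contact'], $users);
    }
}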

Output :
Now, our command is ready to execute.
Go to terminal and execute the command

php artisan user:all

laravel-image1

Print Details On Command Line :

$this->info("message");	          // success message

$this->error("message")          // error message

Get Input Through the Command Line

Now that we know how to fire a custom command, let's understand how to get input from the command line and process it.

Let's make a new command which fetches a particular user's details.

php artisan make:command userDetails

Change signature of our command :

protected $signature = 'user:get { id : ID Of User}';

Note :

  • Here, {variable : description} is used to get input from user.
  • As per our signature command will look like
    php artisan user:get 1
  • Here we can also pass optional values by adding {id?} so “?” is used for optional values.
  • If you want to add options, use {--option_name}; if you want to add a shortcut, use {--A|argument}. You can then read the value with $this->option('option_name') and handle it (e.g. in a switch) as needed.
  • If the number of values is not fixed, you can accept an array by adding *.
    Example :
    • COMMAND FILE : $signature = 'user:get {--id=*}';
    • TERMINAL : php artisan user:get --id=1 --id=2

Now add our logic / task here:

public function handle()
{
	// Add Your Logic / Task 
}

To read the input from the CLI:

$this->argument("argument_name");

In this example we need to select the user's details and their post details, so add:

$bar = $this->output->createProgressBar(100);
$user_details = User::select('id','name','email','contact')
          ->where('id', $this->argument('id'))->get()->toArray();

if (sizeof($user_details) == 0) {
    $this->error("User Not Found!");
    $bar->advance();
} else {
    $this->info("User Details");
    $this->table(['id','name','email','contact'], $user_details);
    $post_details = Post::select('title','description','created_at')
        ->where('user_id', $this->argument('id'))->get()->toArray();
    $bar->advance(50);

    if (sizeof($post_details) == 0) {
        $this->error("No Post Found!");
    } else {
        $this->info("\n\nUser's Posts Details");
        $this->table(['Title','Description','Created By'], $post_details);
        $this->info("\nTotal Post(s) : ".sizeof($post_details));
        $bar->advance(50);
    }
}
$bar->finish();
$this->info(''); // this is for new line

In the above example we added a ProgressBar; it is used purely to show progress and nothing else.

  • First, we need to initialize the ProgressBar with a maximum value.
  • After that, just call advance() with an argument for the number of steps to advance (the default is 1).
  • When done with the ProgressBar, call finish() to end it.
  • The ProgressBar is completely optional; it is only there for presentation.

Output :
Now, our command is ready to execute.
Go to terminal and execute the command

php artisan user:get 1

laravel-image2

php artisan user:all

php artisan post:all

laravel-image3
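Since the original motivation was to run such commands on a schedule, they can be registered in the schedule() method of app/Console/Kernel.php, roughly as below (frequencies are just examples); the cron entry then only needs to run php artisan schedule:run every minute.

protected function schedule(Schedule $schedule)
{
    // Run our custom commands periodically
    $schedule->command('user:all')->daily();
    $schedule->command('user:get 1')->weekly();
}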

Conclusion:

That's it! I hope you found this article useful. For any suggestions, please leave a comment below.

Thank You!

iOS 12 – The next step


Overview

Did you know that 20 million people are building apps for Apple devices?

ios12-image1

The next step
Performance. Stability. Features.

Privacy

ios12-image2

As with all Apple software updates, enhanced privacy and security remain a top priority in iOS 12. In Safari, enhanced Intelligent Tracking Prevention helps block social media “Like” or “Share” buttons and comment widgets from tracking users without permission. Safari now also presents simplified system information when users browse the web, preventing them from being tracked based on their system configuration. Safari now also automatically creates, autofills and stores strong passwords when users create new online accounts and flags reused passwords so users can change them.

Reliability

ios12-image3

  • Apple’s looking to put the problematic iOS 11 behind it
  • Older iPhones will be 40% to 70% faster at certain tasks

iOS 12 will focus on reliability and performance with this update, and support all of the same iPhones and iPads that iOS 11 worked with.

Features

Memoji

Not only can Animoji (and Memoji) recognize when you’re sticking your tongue out in iOS 12, as mentioned in the keynote, Apple says they’re better at recognizing winks as well—so your cartoon interactions will soon be able to go to a whole new level of suggestiveness.

ios12-image4

ios12-image5

Recorded Animoji and Memoji messages can now stretch to 30 seconds too. This is all still limited to the iPhone X and the extra tech it has packed into the front-facing camera.

Screen Time

Not only will your iOS device begin telling you how much time you spend (or waste) on your phone or tablet, but it will give you tools to help you tame your desire to be always connected.

As a reminder, iOS 12 is currently available in beta. It’s likely that features will change and look different by the time it’s released this fall.

Group FaceTime

FaceTime changed the way we communicate and share important moments, and now with Group FaceTime, it’s easy to chat with multiple people at the same time.

Users can video chat with up to 32 people simultaneously, and the FaceTime app will automatically detect and highlight the person who is talking in the group.

Participants can be added at any time, join later if the conversation is still active and choose to join using video or audio from an iPhone, iPad or Mac or even participate using FaceTime audio from Apple Watch.

You can add Animoji to everyone in the call.

ios12-image7

Do Not Disturb

ios12-image8

Do Not Disturb will also get a timing option, so you can set it for when you want it.
It is getting a bigger job – so if you sleep with DND on, or if you check the time at night, you won’t see a million notifications.

Grouped Notification

ios12-image9

The newest iOS version automatically organizes notifications from the same app into separate groups.

iOS 12 offers a much better way to deal with alerts. Apart from grouping notifications, the operating system also lets you hide alerts during your bedtime. If you wish to have complete peace at night, the Bedtime Mode is the perfect solution for keeping distraction at bay.

When you set notification grouping by an app, all the alerts from a particular app will be stacked. Then, you can head into a stack to glance through all the latest notifications. Swim across to find how it’s done!

Updates

Augmented Reality

ios12-image10

ARKit 2 enables developers to create the most innovative AR apps for the world’s largest AR platform, with new tools to integrate shared experiences, persistent AR experiences tied to a specific location, object detection and image tracking, making AR apps even more dynamic.

Core ML

ios12-image11

Apple’s Core ML 2 is 30% faster, cuts AI model sizes by up to 75%.
Apple introduced Core ML in June 2017 with the launch of iOS 11.

Apple's software engineering chief Craig Federighi explained that it used to take one developer, Memrise, 24 hours to train a model with 20,000 images, but that Create ML reduced the training time for the same model to 48 minutes.

Core ML is expected to play a key role in Apple’s future hardware products. The company is reportedly developing a chip — the Apple Neural Engine, or ANE — to accelerate computer vision, speech recognition, facial recognition, and other forms of artificial intelligence.

Photos

ios12-image12

A new sharing suggestions feature makes it easier to share photos with friends, and friends who receive photos are prompted to share back any photos and videos they have from the same trip or event. Search suggestions surface the most relevant Events, People, Places, Groups, Categories and recent searches, and new search functionality lets users combine multiple search terms to find just the right photos.

Smart Voice Assistant (Siri)

ios12-image13

Apple has upgraded Siri to allow it to control third-party apps without users actually opening them. Users can also assign their own voice commands to trigger different actions.

Siri Shortcuts deliver a new, much faster way to get things done with the ability for any app to work with Siri. Siri intelligence can suggest an action at just the right time.

Whether it’s to order a coffee in the morning or start an afternoon workout. Users can customize Shortcuts by creating a simple voice command to kick off the task or download the new Shortcuts app to create a series of actions from different apps that can be carried out with a simple tap or customized voice command.

Conclusion

iOS 12 is a major overhaul to the iOS operating system that introduces tantalizing new features like Group FaceTime, local multiplayer shared AR experiences, new Animoji, and a Memoji feature that’s designed to let you create a personalized Animoji that looks just like you.

Animoji, stickers, text, and more can be used in FaceTime and the Messages app, and there’s a new Screen Time feature to help customers understand and manage the amount of time they’re spending on their iOS devices.

Siri has been improved in iOS 12 with Siri Shortcuts, which is designed to allow Siri to work with any app.

Under-the-hood improvements in iOS 12 will make everyday tasks on the iPhone and iPad faster and more responsive, with the camera launching up to 70 percent faster and the keyboard appearing up to 50 percent faster.

ios12-image14

Only registered developers are able to download the iOS 12 beta at this time. As Apple has done in the past, a public beta will be made available later this summer, after the software has gone through a couple of rounds of developer testing.

iOS 12 will be available as a beta for several months as Apple works out all of the kinks and bugs. The update will see a public launch in the fall alongside new iPhones. iOS 12 will be available on all devices able to run iOS 11.

Adding New Fonts in React JS


Overview

We are going to explore how to add new fonts to a React JS project; for that, we should know how to add styles using a style.css file. Adding new fonts is quite similar to adding a new font in plain CSS, but there are some minor differences, which we are going to explain.

With this approach, React can use new fonts with the same facilities that regular styles provide. Let's understand how it works and what goes on behind it.

Creating A React Project

First, we should have a React JS project; if we don't have one, create a new one:

Example-

npx create-react-app my-react-app
cd my-react-app
npm start

Note: npx comes with npm 5.2 and higher (in this app we used npm 5.6.0).
This will generate a React app named my-react-app, with the following folder structure:

my-react-app
├── README.md
├── node_modules
├── package.json
├── .gitignore
├── public
│   └── favicon.ico
│   └── index.html
│   └── manifest.json
└── src
    └── App.css
    └── App.js
    └── App.test.js
    └── index.css
    └── index.js
    └── logo.svg
    └── registerServiceWorker.js

Analyse the changes to be made

In the public folder we are going to add all the files for the new font, in their different formats and types. Keep them together in one common folder (for example, create a fonts folder and put all the font files inside it).

Example-

my-app
├── README.md
├── node_modules
├── package.json
├── .gitignore
├── public
│   └── fonts(folder which contains all about new fonts)

Now we need to add a style.css file inside the src folder. It will contain the following lines of code; as an example, we show how to add a new font (Miriam) to our app.

my-app
├── README.md
├── node_modules
├── package.json
├── .gitignore
├── public
│   └── favicon.ico
│   └── index.html
│   └── manifest.json
└── src
    └── App.css
    └── style.css(file adding font)
    └── App.js
    └── App.test.js
    └── index.css
    └── index.js

Inside style.css we can add our new fonts as :

@font-face {
   font-family: 'Miriam';
   font-style: normal;
   font-weight: 400;
   src: url('../../../public/fonts/miriam-libre-v2-latin-regular.eot');
   src: url('../../../public/fonts/miriam-libre-v2-latin-regular.eot?#iefix') format('embedded-opentype'),
       url('../../../public/fonts/miriam-libre-v2-latin-regular.woff2') format('woff2'),
       url('../../../public/fonts/miriam-libre-v2-latin-regular.woff') format('woff'),
       url('../../../public/fonts/miriam-libre-v2-latin-regular.ttf') format('truetype'),
       url('../../../public/fonts/miriam-libre-v2-latin-regular.svg') format('svg');
}

Here we can see how the 'Miriam' font is added to our project; inside this file we can also change the font-style and font-weight. To do this, we need a basic idea of the files in the fonts folder: it contains the same font in several formats, with different extensions, so the font works in every kind of environment, and for each of those files we have to give a relative path in our style.css file.

Note: there are several types of files with different extensions here. We will explain each of them once we define the loader for these files.

Process to add new Font

There are following steps to add new fonts:

Step-1=>

Run npm install --save-dev url-loader inside our app so webpack can load the files for the new font. The url-loader works like the file-loader, but can return a Data URL if the file is smaller than a byte limit.

Step-2=>

Now, inside our webpack.dev.js, we need to add the following lines of code:

Inside the module.exports loaders array, we add our loader so the font files get loaded:

loaders: [
  {
    test: /\.(gif|eot|woff|woff2|ttf|svg)$/,
    loaders: [
      'url-loader'
    ]
  }
]

Description of Loaded Files

This url-loader will load the different types of font files, which are as follows:

Embedded OpenType (EOT) files-
EOT fonts were designed by Microsoft. EOT is a compact form of OpenType fonts, designed for embedding fonts on the web.

Web Open Font Format (WOFF) files-
WOFF fonts are supported by essentially all modern browsers.

Web Open Font Format (WOFF2) files-
WOFF2 is the next generation of WOFF.

TrueType Font (TTF) files-
The TrueType font format was developed by Apple and Microsoft as a response to the PostScript font format.

Scalable Vector Graphics font (SVG) files-
SVG fonts are defined using SVG's 'font' element. Older versions of Safari on iPhone and iPad required SVG fonts.

Conclusion:

That covers everything needed to add a new font to our application; we can now use the newly added font as:
fontFamily: "Miriam".

NOTE:
The above applies to the entire app. If we want to add the new font for a single component only, we do everything described above except that the style.css file is added for that component only, not for the entire app. The rest of the steps stay the same, and we use the CSS in the same way:
fontFamily: "Miriam".
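For example, a component could use the newly added font either through an inline style or a CSS class (the component and class names below are only illustrative):

// App.js - uses the 'Miriam' font declared in style.css
import React from 'react';
import './style.css';

const App = () => (
  <div>
    <h1 style={{ fontFamily: 'Miriam' }}>Hello with Miriam</h1>
    {/* or via a CSS class whose rule sets font-family: 'Miriam' */}
    <p className="miriam-text">Body text in the new font</p>
  </div>
);

export default App;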

Facebook Shimmer Animation – Swift 4


Overview

Hello everyone, hope you are doing well and having a nice day. In this article, we are going to focus on the popular and trending Facebook Shimmer animation for data loading in a UITableView, UICollectionView or any UIView. Nowadays, many applications use this type of animation with their own design and look. Personally, I like the latest update of the LinkedIn app and the way it animates the loading of a graph with a line arrow.

If you are an iOS user, then you are familiar with the fancy slide-to-unlock animation.

fb-animation-image1

So I decided to give it a shot. I will help you implement the shimmer animation in your app without using any third-party frameworks, keeping it simple, easy and native.

The following GIF demonstrates what we will build in this article.


fb-animation-image2

Design

Shimmer is a very easy way to add a shimmering effect. I've used a UITableViewCell.

Placeholder Cell
fb-animation-image3

Coding

Before writing code, I thought it would be better to have my own property (on UIView, using @IBInspectable) to easily enable the animation from the storyboard:

@IBInspectable var shimmerAnimation: Bool {
        get {
            return isAnimate
        }
        set {
            self.isAnimate = newValue
        }
    }

We cannot back the IBInspectable value with a stored property here, but with the help of an associated object it's possible; the boolean value is exposed through the computed variable isAnimate.

// Note: associateObjectValue is a key declared once at file scope,
// e.g. fileprivate var associateObjectValue: Int = 0
fileprivate var isAnimate: Bool {
    get {
        return objc_getAssociatedObject(self, &associateObjectValue) as? Bool ?? false
    }
    set {
        objc_setAssociatedObject(self, &associateObjectValue, newValue,
                                 objc_AssociationPolicy.OBJC_ASSOCIATION_RETAIN)
    }
}

Now it's very simple: open the storyboard and select the target UIView that needs to be animated. Go to the Attributes inspector and you will find our property 'Shimmer Animation'. By default it is off; we need to turn it on.

fb-animation-image4

So far so good. Now, we require a recursive function which returns all of a view's subviews, including nested subviews.

func subviewsRecursive() -> [UIView] {
        return subviews + subviews.flatMap { $0.subviewsRecursive() }
    }

Here we don't want all views, only the views with 'shimmer animation' set to true.

func getSubViewsForAnimate() -> [UIView] {
        var obj: [UIView] = []
        for objView in view.subviewsRecursive() {
            obj.append(objView)
        }
        return obj.filter({ (obj) -> Bool in
            obj.shimmerAnimation
        })
    }

That's it. We can start the animation using the startAnimation function. In the method below, we mask each UIView with a CAGradientLayer and animate it with a CABasicAnimation.

We need to set the start and end points of the gradient effect on the gradientLayer object, with colors and a frame matching the animateView. Finally, we mask the animateView's layer with the gradientLayer.

CABasicAnimation requires a keyPath string, so be specific with the key; it plays an important role in the animation. Then pass the animation's duration along with the from and to values. Finally, add the animation object to the gradientLayer.

func startAnimation() {
        for animateView in getSubViewsForAnimate() {
            animateView.clipsToBounds = true
            let gradientLayer = CAGradientLayer()
            gradientLayer.colors = [UIColor.clear.cgColor, UIColor.white.withAlphaComponent(0.8).cgColor, UIColor.clear.cgColor]
            gradientLayer.startPoint = CGPoint(x: 0.7, y: 1.0)
            gradientLayer.endPoint = CGPoint(x: 0.0, y: 0.8)
            gradientLayer.frame = animateView.bounds
            animateView.layer.mask = gradientLayer

            let animation = CABasicAnimation(keyPath: "transform.translation.x")
            animation.duration = 1.5
            animation.fromValue = -animateView.frame.size.width
            animation.toValue = animateView.frame.size.width
            animation.repeatCount = .infinity

            gradientLayer.add(animation, forKey: "")
        }
    }

To stop the animation, just remove all animations from the layer and set the mask layer to nil.

func stopAnimation() {
        for animateView in getSubViewsForAnimate() {
            animateView.layer.removeAllAnimations()
            animateView.layer.mask = nil
        }
    }
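A typical usage flow (a sketch assuming startAnimation() and stopAnimation() live in the view controller, as above) is to start the shimmer while the data loads and stop it once the real content arrives:

override func viewDidLoad() {
    super.viewDidLoad()
    // Show the placeholder cells and start shimmering while data loads.
    startAnimation()

    // Simulated network call; replace with your real API request.
    DispatchQueue.main.asyncAfter(deadline: .now() + 3.0) { [weak self] in
        self?.stopAnimation()
        self?.tableView.reloadData()   // assumes a tableView outlet
    }
}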

Please take a look at the code, and feel free to ask any questions in the comment section below.


Sceneform SDK : A boon for Android developers


Overview

With AR being the trend, is it just me or is using a 2D, stationary image too mainstream nowadays? 😀

ARCore brings the Sceneform SDK, which can scan images and load 3D models accordingly, allowing users to interact with them using gestures. They call this feature Augmented Images. Not only this, but a lot of other AR work can be done with the Sceneform SDK, and far more easily than ever imagined.

Sceneform Overview

Google announced the Sceneform SDK at Google I/O 2018. It handles all the 3D graphics, OpenGL and other complex work by itself, allowing an Android developer to easily develop AR apps with fewer lines of code. It requires a plugin to be installed and Android Studio version 3.1 or above.

Sceneform Capabilities

Being in beta, along with Augmented Images, it provides basic AR functionality like moving, rotating and scaling a 3D model.

It has a functionality called cloud anchors wherein two/multiple users place a 3D model each using their respective devices in a single frame (same environment) and both the models can be viewed from both the devices. Models can also interact with each other. That’s cool, right?

Now, here comes my favourite part… (drum rolls)
Being a native Android developer, the functionality I find most interesting is that it can convert a layout/app screen into a renderable and load it as a 3D model into the physical environment. I can't stop thinking about the endless variety of applications that can be built on this concept!

sceneform-image1

Without spending another minute dreaming about Sceneform-enabled app possibilities :D, let's dive into a practical scenario to have a look at its performance.

Basic information to get started

To start with, we will need its plugin to be installed. Go to Preferences and search for sceneform as shown below, install it and restart the studio.

sceneform-image2

3D models can be downloaded from Google's own website, https://poly.google.com. Sceneform supports 3D models with .obj, .gltf and .fbx extensions, and the SDK has its own formats for models: it converts them into .sfa and .sfb.

.sfb (Sceneform Binary asset) is the actual model that is to be loaded into app and .sfa (Sceneform Asset Definition) is human-readable description of .sfb file.

Below is an example for .sfa file.

sceneform-image3

It stores information of the model like its scale size, textures to be loaded and other material properties. More information regarding .sfa attributes can be found at https://developers.google.com/ar/develop/java/sceneform/sfa

A 3D model can be converted into these formats just by right-clicking on it and selecting Import Sceneform Asset. This opens a dialog where we can specify the output locations.

sceneform-image4
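Behind the scenes, the plugin records the conversion in the app-level build.gradle with a sceneform.asset entry, roughly like the following (file paths are just illustrative):

apply plugin: 'com.google.ar.sceneform.plugin'

// source asset, material ('default' uses the built-in one), .sfa output, .sfb output
sceneform.asset('sampledata/models/model.obj',
        'default',
        'sampledata/models/model.sfa',
        'src/main/assets/model')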

The plugin also provides a viewer to inspect the model in the studio without running the app; it's something I craved while using ARCore's older SDKs. 😀

sceneform-image5

Practical

In our demo, we’ll concentrate on converting a layout/screen into a 3D model. We’ll develop an app wherein we’ll scan an image (the Yudiz team’s picture), which will pop up 3 buttons or tappable icons in 3D that redirect the user to the respective screens when clicked.

Below is the image that I’ll use.

sceneform-image6

Remember : The image should be unique enough to get identified by the SDK.
Store it in assets folder.

Let’s have a look at the other required resource.

sceneform-image7

This is the layout that will pop up when image gets detected by SDK. You can design any layout based on your requirements.

Now, skipping the detailed explanation of the boilerplate code that is needed to detect supported devices and to initialize the ARCore fragment (a rough sketch of it is shown below), let’s have a look at the core functionality.
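
That skipped boilerplate usually looks something like the sketch below. Treat it as an illustration only: the fragment id (ux_fragment), the OpenGL ES version check and the way the ArSceneView is obtained are assumptions, not code taken from this demo.

// Rough sketch of the usual Sceneform boilerplate (illustrative names and ids).
public static boolean checkIsSupportedDevice(final Activity activity) {
   // Sceneform needs OpenGL ES 3.0 or later
   String openGlVersion = ((ActivityManager) activity.getSystemService(Context.ACTIVITY_SERVICE))
           .getDeviceConfigurationInfo()
           .getGlEsVersion();
   if (Double.parseDouble(openGlVersion) < 3.0) {
       Toast.makeText(activity, "Sceneform requires OpenGL ES 3.0 or later", Toast.LENGTH_LONG).show();
       activity.finish();
       return false;
   }
   return true;
}

// In onCreate(), assuming the layout hosts an ArFragment with the id ux_fragment:
ArFragment arFragment = (ArFragment) getSupportFragmentManager().findFragmentById(R.id.ux_fragment);
arSceneView = arFragment.getArSceneView();

Now, back to the core functionality: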

private boolean setupAugmentedImageDb(Config config) {
   AugmentedImageDatabase augmentedImageDatabase;

   Bitmap augmentedImageBitmap = loadAugmentedImage();
   if (augmentedImageBitmap == null) {
       return false;
   }

   augmentedImageDatabase = new AugmentedImageDatabase(session);
   augmentedImageDatabase.addImage("picTeamYudiz", augmentedImageBitmap);

   config.setAugmentedImageDatabase(augmentedImageDatabase);
   return true;
}

private Bitmap loadAugmentedImage() {
   try (InputStream is = getAssets().open("picTeamYudiz.png")) {
       return BitmapFactory.decodeStream(is);
   } catch (IOException e) {
       Log.e(TAG, "IO exception loading augmented image bitmap.", e);
   }
   return null;
}

We need to create an AugmentedImageDatabase to store the images under unique names.
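
Note that setupAugmentedImageDb() only prepares the Config object; during session setup it still has to be applied. A minimal sketch, assuming the session and arSceneView objects mentioned in the boilerplate above:

// Sketch only: wire the augmented image database into the ARCore session.
Config config = new Config(session);
if (!setupAugmentedImageDb(config)) {
   Log.e(TAG, "Could not set up the augmented image database");
}
config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE);
session.configure(config);
arSceneView.setupSession(session);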

private void onUpdateFrame(FrameTime frameTime) {
   Frame frame = arSceneView.getArFrame();
   Collection<AugmentedImage> updatedAugmentedImages =
           frame.getUpdatedTrackables(AugmentedImage.class);

   if (node == null)
       node = new AugmentedImageNode(this);

   for (AugmentedImage augmentedImage : updatedAugmentedImages) {
       if (augmentedImage.getTrackingState() == TrackingState.TRACKING)
           if (augmentedImage.getName().equals("picTeamYudiz")) {
               node.setImage(augmentedImage);
               arSceneView.getScene().addChild(node);
           }
   }

}

This method gets fired whenever the screen frame is updated.
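
The article never shows the wiring explicitly, but onUpdateFrame() is typically registered as a scene update listener, for example (assuming the arSceneView obtained during setup):

// Run onUpdateFrame() once for every rendered frame
arSceneView.getScene().addOnUpdateListener(this::onUpdateFrame);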

Collection<AugmentedImage> updatedAugmentedImages =
       frame.getUpdatedTrackables(AugmentedImage.class);

This code fetches all the augmented images that ARCore has updated in the current frame.

for (AugmentedImage augmentedImage : updatedAugmentedImages) {
   if (augmentedImage.getTrackingState() == TrackingState.TRACKING)
       if (augmentedImage.getName().equals("picTeamYudiz")) {
           node.setImage(augmentedImage);
           arSceneView.getScene().addChild(node);
       }
}

Here, the for loop is used to check whether any of the fetched images is the same as the one we stored in the database.
When this condition is satisfied, the layout is converted into a renderable and added to the AR scene, as shown in the code below.

public void setImage(AugmentedImage image) {
   this.image = image;

   setAnchor(image.createAnchor(image.getCenterPose()));

   CompletableFuture<ViewRenderable> viewCompFuture =
           ViewRenderable.builder().setView(context, R.layout.layout_renderable).build();

   // The renderable is built asynchronously, so attach it to a node only after
   // the future has completed; otherwise renderableView would still be null here.
   CompletableFuture.allOf(viewCompFuture)
           .handle((notUsed, throwable) -> {
               try {
                   renderableView = viewCompFuture.get();

                   Node solarControls = new Node();
                   solarControls.setParent(this);
                   solarControls.setLocalPosition(new Vector3(0.0f, 0.0f, 0.0f));
                   solarControls.setRenderable(renderableView);

                   View renderableLayout = renderableView.getView();
                   listeners(renderableLayout);
               } catch (InterruptedException | ExecutionException e) {
                   e.printStackTrace();
               }

               return null;
           });

}

private void listeners(View renderableLayout) {
   renderableLayout.findViewById(R.id.ivContactUs).setOnClickListener(this);
   renderableLayout.findViewById(R.id.ivYudiz).setOnClickListener(this);
   renderableLayout.findViewById(R.id.ivLinkedIn).setOnClickListener(this);
}

Here, a CompletableFuture is created from the layout, which ultimately provides the renderable.
Once it has completed, I obtain a view from the renderable to find the IDs of its elements and set click listeners on them.

That’s it. We have successfully added interactions to the image. Yay ! 😀

Check out the git repository for better understanding.
https://gitlab.com/YudizSumeet/augmented-images.git

Video Description

Google’s Sceneform SDK brings a marker-detection feature called Augmented Images. Scanning a unique image to load 3D models into the virtual environment can be used in various ways for business and personal purposes. Here is one such demo.

Application ideas

An ID-card application can be developed using this feature. A card that carries content humans cannot read, like a QR code, can be scanned and the actual information fetched and shown in 3D using ARCore.

Conclusion

The Sceneform SDK is nothing less than a boon for Android developers who are eager to learn AR. Being so powerful in its beta version, I’m eager to see what its future releases will bring.

The Simple Steps To Virtual Object Interaction Using ARKit.


Overview

ARKit is basically a bridge between the real world and virtual objects; it lets the two interact. This demo application shows how to place an object and how to interact with your virtual objects using gestures and hit testing.

Prerequisites

  • Xcode 9.3
  • iOS 11.3
  • Device with A9 processor

Project Setup

Open Xcode and create a new project. Choose “Augmented Reality App” and fill the required details.

interaction-image1

Apple provides options for Content Technology: SceneKit, SpriteKit and Metal. Here we will choose SceneKit. If you want to place a 3D object model, Xcode needs to read that 3D object file in a SceneKit-supported format (.scn) or .dae.

ARKit is a session-based framework. The session contains a scene that renders virtual objects in the real world. For that, ARKit needs to use an iOS device’s camera. So you have to add this to your info.plist file.

Privacy – Camera Usage Description.

interaction-image2

Now we need to set up a couple of IBOutlets as below: an ARSCNView and a UILabel (infoLabel) for informing the user about AR session states and any updates to the node.

@IBOutlet var sceneView: ARSCNView!
    @IBOutlet var infoLabel: UILabel!

interaction-image3

For debugging purposes you can set sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints] and see how ARKit detects surfaces. When you run the app, you should see a lot of yellow dots in the scene. These are feature points, and they help estimate properties like the orientation and position of physical objects in the current environment. The more feature points in the area, the better the chance that ARKit can determine and track the environment.

override func viewDidLoad() {
        ….      
        // Show statistics such as fps and timing information
        sceneView.showsStatistics = true

        sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints]

       let scene = SCNScene()
       sceneView.scene = scene
       ....
       }

Now it’s time to set up a world-tracking session with horizontal plane detection. As you can see, in the viewWillAppear method the session has already been created and set to run; we just add:

configuration.planeDetection = .horizontal

So now your method will look like this.

override func viewWillAppear(_ animated: Bool) {
        ...
        // Create a session configuration
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
        ...
    }

Detect plane and place object

When ARKit detects a surface, it provides an ARPlaneAnchor object. An ARPlaneAnchor basically contains information about the position and orientation of a real-world detected surface.

To know when a surface is detected, updated or removed, use the ARSCNViewDelegate methods, which work like magic in ARKit. Implement the following ARSCNViewDelegate methods so you will be notified when an update is available in the scene view.

override func viewDidLoad() {
        ...      
        sceneView.delegate = self
        ...
 }

 // MARK: - ARSCNView delegate

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        // Called when any node has been added to the anchor
    }

 func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
        // This method will help when any node has been removed from sceneview
    }

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        // Called when any node has been updated with data from anchor
    }

 func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        // help us inform the user when the app is ready
    }

The ARSessionDelegate protocol provides the current tracking state of the camera, so you can tell whether your app is ready to detect planes or not. When you get the normal state, you are ready to detect a plane. For that, implement these delegates.

// MARK: - ARSessionObserver

    func sessionWasInterrupted(_ session: ARSession) {
        infoLabel.text = "Session was interrupted"
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        infoLabel.text = "Session interruption ended"
        resetTracking()
    }

    func session(_ session: ARSession, didFailWithError error: Error) {
        infoLabel.text = "Session failed: \(error.localizedDescription)"
        resetTracking()
    }

     func resetTracking() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }

func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        // help us inform the user when the app is ready
        switch camera.trackingState {
        case .normal :
            infoLabel.text = "Move the device to detect horizontal surfaces."

        case .notAvailable:
            infoLabel.text = "Tracking not available."

        case .limited(.excessiveMotion):
            infoLabel.text = "Tracking limited - Move the device more slowly."

        case .limited(.insufficientFeatures):
            infoLabel.text = "Tracking limited - Point the device at an area with visible surface detail."

        case .limited(.initializing):
            infoLabel.text = "Initializing AR session."

        default:
            infoLabel.text = ""
        }
    }

When a plane has been detected, we add an object onto it. Here we are going to add a 3D model named “Shoes_V4.dae”.

class ViewController: UIViewController, ARSessionDelegate {
  ...
  var object: SCNNode!
  ...

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        // Called when any node has been added to the anchor
        ...
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        DispatchQueue.main.async {
            self.infoLabel.text = "Surface Detected."
        }

        let shoesScene = SCNScene(named: "Shoes_V4.dae", inDirectory: "Model.scnassets")
        object = shoesScene?.rootNode.childNode(withName: "group1", recursively: true)
        object.simdPosition = float3(planeAnchor.center.x, planeAnchor.center.y, planeAnchor.center.z)
        // Add the model as a child of the detected plane's node so it sits on the surface
        node.addChildNode(object)
        ...
    }
}

You can get the child node’s name from here; in this case it is “group1”.

interaction-image4

Now build and run your app. You will see that some surfaces show more feature points, while in some areas you will not get as good a result. Surfaces that are shiny or single-coloured make it difficult for ARKit to obtain a strong reference for plane detection and to determine unique points in the environment. If you are unable to see many feature points, move your device around the area and try different objects or surfaces. Once ARKit has detected a plane, your object will be added onto it.

Change position of object to tap location with UITapGestureRecognizer

To place an object on tap, first add a UITapGestureRecognizer to the scene view.

override func viewDidLoad() {
        ...        
        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(didTap(_:)))
        sceneView.addGestureRecognizer(tapGesture)
        ...       
}

Then, in the tap gesture handler, add a node at the tap position. A node represents the position and coordinates of an object in 3D space. Here we set the node’s position to the tap position.

@objc
    func didTap(_ gesture: UITapGestureRecognizer) {
       guard let _ = object else { return }

        let tapLocation = gesture.location(in: sceneView)
        let results = sceneView.hitTest(tapLocation, types: .featurePoint)

        if let result = results.first {
            let translation = result.worldTransform.translation
            object.position = SCNVector3Make(translation.x, translation.y, translation.z)
            sceneView.scene.rootNode.addChildNode(object)
        }
    }

For getting the translation of worldTransform add this extension.

extension float4x4 {
    var translation: float3 {
        let translation = self.columns.3
        return float3(translation.x, translation.y, translation.z)
    }
}

Scaling object with UIPinchGestureRecognizer

To zoom a 3D object in and out, we have to change the object’s scale while the user pinches. To recognize when the user pinches on the scene view, add a UIPinchGestureRecognizer.

override func viewDidLoad() {
        ...        
        let pinchGesture = UIPinchGestureRecognizer(target: self, action: #selector(didPinch(_:)))
        sceneView.addGestureRecognizer(pinchGesture)
        ...       
}

Here we set a maximum scale of 2 (200% of the original object size) and a minimum scale of 0.5 (50% of the original object size). You can set these according to your needs.

@objc
    func didPinch(_ gesture: UIPinchGestureRecognizer) {
        guard let _ = object else { return }
        var originalScale = object?.scale

        switch gesture.state {
        case .began:
            originalScale = object?.scale
            gesture.scale = CGFloat((object?.scale.x)!)
        case .changed:
            guard var newScale = originalScale else { return }
            if gesture.scale < 0.5 {
                newScale = SCNVector3(x: 0.5, y: 0.5, z: 0.5)
            } else if gesture.scale > 2 {
                newScale = SCNVector3(2, 2, 2)
            } else {
                newScale = SCNVector3(gesture.scale, gesture.scale, gesture.scale)
            }
            object?.scale = newScale
        case .ended:
            guard var newScale = originalScale else { return }
            if gesture.scale < 0.5 {
                newScale = SCNVector3(x: 0.5, y: 0.5, z: 0.5)
            } else if gesture.scale > 2 {
                newScale = SCNVector3(2, 2, 2)
            } else {
                newScale = SCNVector3(gesture.scale, gesture.scale, gesture.scale)
            }
            object?.scale = newScale
            gesture.scale = CGFloat((object?.scale.x)!)
        default:
            gesture.scale = 1.0
            originalScale = nil
        }
    }

Rotate object using UIPanGestureRecognizer

To rotate an object using a pan gesture, add a UIPanGestureRecognizer to the scene view.

override func viewDidLoad() {
        ...        
        let panGesture = UIPanGestureRecognizer(target: self, action: #selector(didPan(_:)))
        panGesture.delegate = self
        sceneView.addGestureRecognizer(panGesture)
        ...       
}

class ViewController: UIViewController, ARSessionDelegate {
    ...
    var currentAngleY: Float = 0.0
    ...

    @objc
    func didPan(_ gesture: UIPanGestureRecognizer) {
        guard let _ = object else { return }
        let translation = gesture.translation(in: gesture.view)
        var newAngleY = (Float)(translation.x)*(Float)(Double.pi)/180.0

        newAngleY += currentAngleY
        object?.eulerAngles.y = newAngleY

        if gesture.state == .ended{
            currentAngleY = newAngleY
        }
    }
}

You can also rotate the object using a UIRotationGestureRecognizer, but that recognizes rotation with two fingers. Here we used only one finger to rotate the object in the scene view.

Thanks for coming.
If you have enjoyed and learned something valuable from this tutorial, please let me know by sharing this tutorial with your friends.

Chatbots are the future!


Overview

The advancement of artificial intelligence is now in full swing, and chatbots are only a faint splash on a huge wave of progress. Today a lot of businesses are deploying chatbots on WhatsApp, Slack, Skype and Facebook Messenger; Messenger alone has 1.2 billion monthly users and that number is still growing. With the spread of messengers, virtual chatbots that emulate human conversation are taking over various tasks and becoming increasingly popular.

What is a chatbot and why do you need it?

chatbot-bg

A chatbot is an artificial intelligence (AI) program that simulates interactive human conversation by using key pre-calculated user phrases and auditory or text-based signals. According to Techopedia, chatbots are frequently used for basic customer service and marketing systems like social networking hubs and instant messaging (IM) clients. They are also often included in operating systems as intelligent virtual assistants. According to a recent report by Grand View Research, the global chatbot market is expected to reach $1.23 billion by 2025, a compound annual growth rate (CAGR) of 24.3%. Within the global chatbot market, around 45% of end users prefer chatbots as the primary mode of communication for customer-service requests.

chatbot-chat

An Oracle survey says that 80% of businesses want chatbots by 2020.

What kinds of chatbots can there be?

Telling you what kind of chatbot to create is not really my place (it depends on your business domain and how it helps your business), but let’s see what kinds of chatbots are available in the market and what kinds are possible to build.

  • E- Commerce
  • Online Marketing
  • Customer Service
  • Travel, Movies
  • Hospitality
  • Banking
  • Financial Services
  • HR & Recruiting
  • Assistant For appointment
  • IOT & Voice Based

What’s good and bad about chatbots?

  • Fast and Accurate
  • Works 24*7 without taking break
  • Customer satisfaction
  • Automation of repetitive work
  • Bots are not equal to Google…
    Without knowing a bot’s type, you can’t just ask it anything or throw non-relevant questions at it. Some smart bots might even answer you with sarcasm.

How to Build Chatbot ?

  • Analyze the problem, and ask why a chatbot is the right solution for it.
  • Choose the platform on which you want to integrate your bot:
    • Facebook Messenger
    • Whatsapp
    • Slack
    • Skype
    • Alexa
    • Cortana
    • Google Assistant
    • Twilio
    • Telegram
  • Choosing an NLP engine is the most essential part of building a bot. Your bot should be able to understand the natural language that humans speak. Popular NLP engines include Dialogflow, Wit.ai, IBM Watson, Microsoft LUIS and Amazon Lex.
  • That’s It. Now Create, Customize, Test and Launch.

Conclusion : –

In conclusion, there are plenty of advantages to having a chatbot, some of which have been described in this post. Organizations that can invest a considerable amount of money can earn more revenue with the help of these bots, which work 24*7 without taking a break. Furthermore, with new advances coming every year in AI, NLP and machine learning, bots that are clever enough to no longer need guidance from a human representative aren’t too far away. So it is safe to say that the future of chatbots is bright and shining.

Image Recognition and Tracking Using ARKit-2


Overview

Hello everyone, hope you are doing well and having a nice day. In this article, we are going to focus on image recognition and tracking using ARKit 2. At WWDC 2018, Apple announced lots of features in ARKit 2, including much-improved detection of known 2D images in the user’s environment, using their positions to place AR content.

Following video will give a brief idea about today’s article.

Prerequisites:

Before any implementation, we need to take care of the requirements below:

  • Xcode 10 (beta or above)
  • iOS 12 (beta or above)
  • iPhone 6S or above iDevice.

Provide your reference images

To provide reference images, we need to add them to our project’s asset catalog in Xcode.

  • Open the asset catalog in the project, click the (+) in the left corner, or right-click, to add a new AR Resources folder group and rename it as per your requirement.
  • Drag the images from Finder into the newly created folder.
  • For each individual image, set its physical dimensions using the inspector.

arkit2-image1

Configuring image tracking

We need to create an ARImageTrackingConfiguration; this allows us to track our reference images in the user’s environment. After that, load the set of ARReferenceImage objects from the AR Resources folder (in my case I have named the group “iOSDept”). The maximumNumberOfTrackedImages property sets the maximum number of images tracked in a given frame; the default value is one.

func configureARImageTracking() {
        // Create a session configuration
        let configuration = ARImageTrackingConfiguration()
        if let imageTrackingReference = ARReferenceImage.referenceImages(inGroupNamed: "iOSDept", bundle: Bundle.main) {
            configuration.trackingImages = imageTrackingReference
            configuration.maximumNumberOfTrackedImages = 1
        } else {
            print("Error: Failed to get image tracking referencing image from bundle")
        }
        // Run the view's session
        sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }

Image Detection

Now we are going to update the renderer(_:didAdd:for:) method, because the anchor passed to this method is of type ARImageAnchor.

Cast the ARAnchor down to ARImageAnchor with an if statement. The imageAnchor object contains the reference image that we placed in the asset catalog under the “iOSDept” folder. We then create an SCNPlane matching the physical size of the detected image and add it as a semi-transparent node to highlight the image.

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        /// Casting down ARAnchor to `ARImageAnchor`.
        if let imageAnchor =  anchor as? ARImageAnchor {
            let imageSize = imageAnchor.referenceImage.physicalSize

            let plane = SCNPlane(width: CGFloat(imageSize.width), height: CGFloat(imageSize.height))
            plane.firstMaterial?.diffuse.contentsTransform = SCNMatrix4Translate(SCNMatrix4MakeScale(1, -1, 1), 0, 1, 0)

            let imageHightingAnimationNode = SCNNode(geometry: plane)
            imageHightingAnimationNode.eulerAngles.x = -.pi / 2
            imageHightingAnimationNode.opacity = 0.25
            node.addChildNode(imageHightingAnimationNode)
       } else {
            print("Error: Failed to get ARImageAnchor")
       }
}

Load and animate Spritekit scene

Here I’ve used a SpriteKit scene named “About” and initialized an object for it. Make sure the isPaused property is false, otherwise the nodes it contains will not animate. To load the SKScene into the user’s environment, we create an SCNPlane and assign the aboutSpriteKitScene object to aboutUsPlane.firstMaterial?.diffuse.contents, which loads the SpriteKit scene into the real world.

Animating the SKNodes is pretty simple: just use SKAction and its methods.

// About
                let aboutSpriteKitScene = SKScene(fileNamed: "About")
                aboutSpriteKitScene?.isPaused = false
                
                let aboutUsPlane = SCNPlane(width: CGFloat(imageSize.width * 1.5), height: CGFloat(imageSize.height * 1.2))
                aboutUsPlane.firstMaterial?.diffuse.contents = aboutSpriteKitScene
                aboutUsPlane.firstMaterial?.diffuse.contentsTransform = SCNMatrix4Translate(SCNMatrix4MakeScale(1, -1, 1), 0, 1, 0)
                
                let aboutUsNode = SCNNode(geometry: aboutUsPlane)
                aboutUsNode.geometry?.firstMaterial?.isDoubleSided = true
                aboutUsNode.eulerAngles.x = -.pi / 2
                aboutUsNode.position = SCNVector3Zero
                node.addChildNode(aboutUsNode)
                
                let moveAction = SCNAction.move(by: SCNVector3(0.25, 0, 0), duration: 0.8)
                aboutUsNode.runAction(moveAction, completionHandler: {
                    let titleNode = aboutSpriteKitScene?.childNode(withName: "TitleNode")
                    titleNode?.run(SKAction.moveTo(y: 90, duration: 1.0))
                    
                    let name = aboutSpriteKitScene?.childNode(withName: "DescriptionNode")
                    name?.run(SKAction.moveTo(y: -30, duration: 1.0))
})

Load and animate 3D Model on reference image

Loading and animating a 3D model on the reference image is simple. Place your 3D model in the art.scnassets folder along with its texture images. Create an SCNScene object using the 3D model’s name, take its first child node, and set the angles, scale and other properties as required. Finally, add that node as a child inside the didAdd-node method.

// Logo Related
let logoScene = SCNScene(named: "art.scnassets/Yudiz_3D_Logo.dae")!
let logoNode = logoScene.rootNode.childNodes.first!
logoNode.scale = SCNVector3(0.022, 0.022, 0.022)
logoNode.eulerAngles.x = -.pi / 2
logoNode.position = SCNVector3Zero
logoNode.position.z = 0.05
let rotationAction = SCNAction.rotateBy(x: 0, y: 0, z: 0.5, duration: 1)
let inifiniteAction = SCNAction.repeatForever(rotationAction)
logoNode.runAction(inifiniteAction)
node.addChildNode(logoNode)

Conclusion : –

I hope you like this article and have learned some valuable things from it. Feel free to share it with your social network. For more reference, you can download this project from GitHub. Also, I welcome your contribution to this project.

Chatbot for Facebook Messenger using dialogflow and Node.js: Part1


Overview

Here we are going to build a simple Facebook Messenger chatbot using Dialogflow and Node.js. We won’t be going too deep into it, but we will cover all the kinds of responses that the Messenger platform supports, like the generic template, receipt, button, media, list and graph.

Prerequisite

  • Facebook Page
  • Facebook Developer Account
  • Understanding of Dialogflow
  • Knowledge of Node.js

Getting Started

Let’s start by creating Facebook App from Facebook Developer Account.

chatbot-fb-image1

You will be redirected to the Dashboard, Add a Messenger Product from there

chatbot-fb-image2

After setting up, select your Facebook page in Token Generation and Generate Token from there.

chatbot-fb-image3

Setting Up Server

Creating a simple server in node.js is easy.

Navigate to the folder where you are going to set up this project. Open that folder in your terminal and run npm init; this will generate the package.json file.

Now we have to install our dependencies.
Run npm i apiai axios body-parser express uuid --save

After installing the dependencies, create an index.js file and import the dependencies we just installed, then create a simple Express server. Also make one config.js file so we can store our credentials in that file for better code management and security (or, instead of config.js, you can use an env file).

Index.js

const apiai = require("apiai");
const express = require("express");
const bodyParser = require("body-parser");
const uuid = require("uuid");
const axios = require('axios');

//Import Config file
const config = require("./config");

// Create the express app
const app = express();

//setting Port
app.set("port", process.env.PORT || 5000);

//serve static files in the public directory
app.use(express.static("public"));

// Process application/x-www-form-urlencoded
app.use(
  bodyParser.urlencoded({
    extended: false
  })
);

// Process application/json
app.use(bodyParser.json());

// Index route
app.get("/", function (req, res) {
  res.send("Hello world, I am a chat bot");
});

// for Facebook verification
app.get("/webhook/", function (req, res) {
  console.log("request");
  if (
    req.query["hub.mode"] === "subscribe" &&
    req.query["hub.verify_token"] === config.FB_VERIFY_TOKEN
  ) {
    res.status(200).send(req.query["hub.challenge"]);
  } else {
    console.error("Failed validation. Make sure the validation tokens match.");
    res.sendStatus(403);
  }
});

// Spin up the server
app.listen(app.get("port"), function () {
  console.log("Magic Started on port", app.get("port"));
});

Spin up the server. We need to make the communication live, so here we are using ngrok to expose our localhost, but you can always use any other option like Heroku, localtunnel or other third-party services.

To make our server live, type ./ngrok http 5000 in your terminal, which will give you a live URL. Make sure you have ngrok downloaded for your OS before running the above command, and that the ngrok file is in your current working directory.

Config.js

module.exports = {
  FB_PAGE_TOKEN: "Page Access Token",
  FB_VERIFY_TOKEN: "Facebook Verification code for Webhook",
  API_AI_CLIENT_ACCESS_TOKEN: "DialogFlow token",
  FB_APP_SECRET: "Facebook Secret Code",
};

FB_PAGE_TOKEN : Copy the Page Access Token that we generated and paste it into config.js.
FB_APP_SECRET : You will find the App Secret under Settings > Basic in your Facebook app dashboard.

Now click on Set Up Webhook; you will find it just below the token generation window.
Paste your server URL with the /webhook endpoint, set the Verify Token to anything you like, and check messages and messaging_postbacks.

When you click on Verify and Save, you will receive a verification GET request from Facebook. (Make sure you copy the https:// URL.)

FB_VERIFY_TOKEN : Paste the verify token in your config.js file.
SERVER_URL : Copy your ngrok live URL and paste.

chatbot-fb-image4

Dialogflow integration

Now let’s connect Dialogflow with our webhook code. Add a new agent, select the V1 API, then copy the client access token and paste it into API_AI_CLIENT_ACCESS_TOKEN.

chatbot-fb-image5

Let’s create intent in dialogflow.

  1. Add intent from the left sidebar.
  2. Give an Intent Name: Send-text
  3. Add Training Phrases “Hey, send me an example of a text message” or relevant to it.
  4. Add Action Name “send-text”
  5. Save it.
  6. Now do the same thing for send-image, send-media, send-list, send-receipt, send-quick-reply, send-graph and send-carousel. Make sure you give a unique action to every intent; we need to identify the user’s intent to send the appropriate response from our webhook server.

chatbot-fb-image6

  7. Click on the Fulfillment tab, add your webhook endpoint there and save it.

chatbot-fb-image7

  8. That’s it, nothing more is needed in Dialogflow for this example.

If you are not familiar with dialogflow please read the documentation.

Let’s come back to index.js and add this code snippet to connect with Dialogflow.

const apiAiService = apiai(config.API_AI_CLIENT_ACCESS_TOKEN, {
  language: "en",
  requestSource: "fb"
});
const sessionIds = new Map();

Setup Webhook Endpoint

Now, when a user sends a message to our Facebook page, we will receive a POST request on our Node server, so we need to handle the /webhook endpoint.

/*
 * All callbacks for Messenger are POST-ed. They will be sent to the same
 * webhook. Be sure to subscribe your app to your page to receive callbacks
 * for your page. 
 * https://developers.facebook.com/docs/messenger-platform/product-overview/setup#subscribe_app
 *
 */
app.post("/webhook/", function (req, res) {
  var data = req.body;
  // Make sure this is a page subscription
  if (data.object == "page") {
    // Iterate over each entry
    // There may be multiple if batched
    data.entry.forEach(function (pageEntry) {
      var pageID = pageEntry.id;
      var timeOfEvent = pageEntry.time;

      // Iterate over each messaging event
      pageEntry.messaging.forEach(function (messagingEvent) {
        if (messagingEvent.message) {
          receivedMessage(messagingEvent);
        } else {
          console.log("Webhook received unknown messagingEvent: ",messagingEvent);
        }
      });
    });
    // Assume all went well.
    // You must send back a 200, within 20 seconds
    res.sendStatus(200);
  }
});

messages and messaging_postbacks are the two events that we checked while setting up the webhook (we are not using the postback event here).
Now let’s write the receivedMessage(messagingEvent) function:

function receivedMessage(event) {
  var senderID = event.sender.id;
  var recipientID = event.recipient.id;
  var timeOfMessage = event.timestamp;
  var message = event.message;

  if (!sessionIds.has(senderID)) {
    sessionIds.set(senderID, uuid.v1());
  }

  var messageId = message.mid;
  var appId = message.app_id;
  var metadata = message.metadata;

  // You may get a text or attachment but not both
  var messageText = message.text;
  var messageAttachments = message.attachments;

  if (messageText) {
    //send message to api.ai
    sendToApiAi(senderID, messageText);
  } else if (messageAttachments) {
    handleMessageAttachments(messageAttachments, senderID);
  }
}

If you log the event to the console, you will get JSON like this:

chatbot-fb-image8

For now, just focus on sender.id and message.text.
If there is messageText in the receivedMessage() function, we call sendToApiAi().

In this function, we will first call sendTypingOn() to show that bot is typing in Messenger.

function sendToApiAi(sender, text) {
  sendTypingOn(sender);
  let apiaiRequest = apiAiService.textRequest(text, {
    sessionId: sessionIds.get(sender)
  });

  apiaiRequest.on("response", response => {
    if (isDefined(response.result)) {
      handleApiAiResponse(sender, response);
    }
  });

  apiaiRequest.on("error", error => console.error(error));
  apiaiRequest.end();
}

Send Typing On

sendTypingOn() calls the Facebook Send API to send the typing action.

/*
 * Turn typing indicator on
 *
 */
const sendTypingOn = (recipientId) => {
  var messageData = {
    recipient: {
      id: recipientId
    },
    sender_action: "typing_on"
  };
  callSendAPI(messageData);
}

The callSendAPI() function sends whatever message data we generate (here we are sending the typing-on action).

/*
 * Call the Send API. The message data goes in the body. If successful, we'll 
 * get the message id in a response 
 *
 */
const callSendAPI = async (messageData) => {

const url = "https://graph.facebook.com/v3.0/me/messages?access_token=" + config.FB_PAGE_TOKEN;
  await axios.post(url, messageData)
    .then(function (response) {
      if (response.status == 200) {
        var recipientId = response.data.recipient_id;
        var messageId = response.data.message_id;
        if (messageId) {
          console.log(
            "Successfully sent message with id %s to recipient %s",
            messageId,
            recipientId
          );
        } else {
          console.log(
            "Successfully called Send API for recipient %s",
            recipientId
          );
        }
      }
    })
    .catch(function (error) {
      console.log(error.response.headers);
    });
}

Let’s come back to the sendToApiAi() function: next we call the isDefined() function just to make sure we are receiving a proper response.

const isDefined = (obj) => {
  if (typeof obj == "undefined") {
    return false;
  }
  if (!obj) {
    return false;
  }
  return obj != null;
}

In the same function sendToApiAi() we will get the response from the Dialogflow in form of JSON.

chatbot-fb-image9

Send that data to the handleApiAiResponse().

function handleApiAiResponse(sender, response) {
  let responseText = response.result.fulfillment.speech;
  let responseData = response.result.fulfillment.data;
  let messages = response.result.fulfillment.messages;
  let action = response.result.action;
  let contexts = response.result.contexts;
  let parameters = response.result.parameters;

  sendTypingOff(sender);

 if (responseText == "" && !isDefined(action)) {
    //api ai could not evaluate input.
    console.log("Unknown query" + response.result.resolvedQuery);
    sendTextMessage(
      sender,
      "I'm not sure what you want. Can you be more specific?"
    );
  } else if (isDefined(action)) {
    handleApiAiAction(sender, action, responseText, contexts, parameters);
  } else if (isDefined(responseData) && isDefined(responseData.facebook)) {
    try {
      console.log("Response as formatted message" + responseData.facebook);
      sendTextMessage(sender, responseData.facebook);
    } catch (err) {
      sendTextMessage(sender, err.message);
    }
  } else if (isDefined(responseText)) {
    sendTextMessage(sender, responseText);
  }
}

Send Typing Off

Remember, we turned the typing indicator on in Messenger; now that we have a response, we turn it off by calling the sendTypingOff() function.

/*
 * Turn typing indicator off
 *
 */
const sendTypingOff = (recipientId) => {
  var messageData = {
    recipient: {
      id: recipientId
    },
    sender_action: "typing_off"
  };

  callSendAPI(messageData);
}

Send Text Message

Whenever we get an unknown query from a user, we have to send a default message to that user.

const sendTextMessage = async (recipientId, text) => {
  var messageData = {
    recipient: {
      id: recipientId
    },
    message: {
      text: text
    }
  };
  await callSendAPI(messageData);
}

The above function calls the Facebook Send API and sends the text message that we defined as the default.

Now, if the user’s intent is matched by Dialogflow, we will get the action of that intent (the action comes from the Dialogflow response), and based on that action we will send the appropriate response to the user.

When the user asks “Send me an example of a text message”, our intent “send-text” gets triggered, and based on the intent we get its unique action. In my case I gave the action the same name as the intent.

If we get an action from the Dialogflow response, we call handleApiAiAction().

function handleApiAiAction(sender, action, responseText, contexts, parameters) {
   switch (action) {
    case "send-text":
      var responseText = "This is example of Text message."
      sendTextMessage(sender, responseText);
      break;
    default:
      //unhandled action, just send back the text
    sendTextMessage(sender, responseText);
  }
}

chatbot-fb-image10

Conclusion : –

This is how you can interact with users by sending a simple text message from the webhook server. Next time we will look at rich messages like images, videos, quick replies and receipt templates.
Comment your ideas about chatbots and we will try to build them together.

Chatbot for Facebook Messenger using dialogflow and Node.js: Part2


Overview

Here we are going to build a simple Facebook Messenger chatbot using Dialogflow and Node.js. We won’t be going too deep into it, but we will cover all the kinds of responses that the Messenger platform supports, like the generic template, receipt, button, media, list and graph. I hope you’ve gone through the first part of this blog, Part-1; it covers the basic server configuration and how to send simple responses like text messages.

Prerequisite

  • Facebook Page
  • Facebook Developer Account
  • Understanding of Dialogflow
  • Knowledge of Node.js

Getting Started

We have seen how to send a simple text message from the webhook server using the Facebook Send API.
Now we will look at rich responses like cards, receipts, videos, images and buttons. But before getting started, make sure everything from the previous blog is working and that all the intents are set up with a unique action in Dialogflow.

Send Image

chatbot-fb2-image1

When the user asks ‘send me an image’, our Dialogflow intent ‘send-photo’ gets triggered, and based on the intent we get the action of that specific intent (make sure the action name matches the case string used below).

Make a new case in switch statement in index.js file

Index.js

case "fb-send-image":
     var imgUrl = "https://mir-s3-cdn-cf.behance.net/project_modules/max_1200/881e6651881085.58fd911b65d88.png";
      sendImageMessage(sender, imgUrl);
break;

Now make new function that can send Image.

const sendImageMessage = async (recipientId, imageUrl) => {
  var messageData = {
    recipient: {
      id: recipientId
    },
    message: {
      attachment: {
        type: "image",
        payload: {
          url: imageUrl
        }
      }
    }
  };
    await callSendAPI(messageData);
}

The callSendAPI() function we made last time will send the messageData to the Send API, and the user will get the image as a response.

Send Video or Media template

chatbot-fb2-image2

When the user asks ‘send me a video’, our Dialogflow intent ‘send-video’ gets triggered, and based on the intent we get the action of that specific intent.

case "send-video":
    const messageData = [
        {
            "media_type": "video",
            "url": "https://www.facebook.com/FacebookIndia/videos/1772075119516020/",
            "buttons": [
                {
                    "type": "web_url",
                    "url": "https://f1948e04.ngrok.io",
                    "title": "View Website",
                }
            ]
        }
    ]
    sendVideoMessage(sender, messageData);
break;

You can also attach a button to the video or image you are sending; for more information, look at the media template.

Note: with this media template you can only send images and videos that are posted on Facebook; you cannot send them from other sources. If you are sending an image, set media_type to “image”; for a video message, media_type will be “video”.

Let’s create function to send the video to user.

const sendVideoMessage = async (recipientId, elements) => {
  const messageData = {
    recipient: {
      id: recipientId
    },
    message: {
      attachment: {
        type: "template",
        payload: {
          template_type: "media",
          elements: elements
        }
      }
    }
  };
await callSendAPI(messageData)
}

The callSendAPI() function will send the messageData to the Send API, and the user will get the video or image as a response.

Send Quick Replies

chatbot-fb2-image3

Sending quick replies comes in handy and it’s easy. We use quick replies to request a person’s location, email address or phone number. When you tap a quick reply button it disappears, and the title of that button is posted to the conversation as a message.

Add a new case in Switch statement,

case "send-quick-reply":
    var responseText = "Choose the options"
    var replies = [{
        "content_type": "text",
        "title": "Example 1",
        "payload": "Example 1",
    },
    {
        "content_type": "text",
        "title": "Example 2",
        "payload": "Example 2",
    },
    {
        "content_type": "text",
        "title": "Example 3",
        "payload": "Example 3",
    }];
    sendQuickReply(sender, responseText, replies)
break;

Now let’s create the function to send those replies,

const sendQuickReply = async (recipientId, text, replies, metadata) => {
  var messageData = {
    recipient: {
      id: recipientId
    },
    message: {
      text: text,
      metadata: isDefined(metadata) ? metadata : "",
      quick_replies: replies
    }
  };
  await callSendAPI(messageData);
}

The callSendAPI() function will send the messageData to the Send API, and the user will get the quick replies as a response.

Send Generic template & Carousel of Generic Templates

chatbot-fb2-image4

The generic template is a kind of card that contains a maximum of 3 buttons, an image, a title and a subtitle. A carousel is like a slider that includes two or more generic templates. From a developer’s point of view, while making a production-level chatbot we should not write two separate functions for sending these responses, so we will do a small optimization.

Create new case in switch statement,

case "send-carousel" :
  const elements = [{
    "title": "Welcome!",
    "subtitle": "We have the right hat for everyone.We have the right hat for everyone.We have the right hat for everyone.",
    "imageUrl": "https://www.stepforwardmichigan.org/wp-content/uploads/2017/03/step-foward-fb-1200x628-house.jpg",
    "buttons": [
      {
        "postback": "https://f1948e04.ngrok.io",
        "text": "View Website"
      }, {
        "text": "Start Chatting",
        "postback": "PAYLOAD EXAMPLE"
      }
    ]
  }, {
    "title": "Welcome!",
    "imageUrl": "https://www.stepforwardmichigan.org/wp-content/uploads/2017/03/step-foward-fb-1200x628-house.jpg",
    "subtitle": "We have the right hat for everyone.We have the right hat for everyone.We have the right hat for everyone.",
    "buttons": [
      {
        "postback": "https://f1948e04.ngrok.io",
        "text": "View Website"
      }, {
        "text": "Start Chatting",
        "postback": "PAYLOAD EXAMPLE"
      }
    ]
  },{
    "title": "Welcome!",
    "imageUrl": "https://www.stepforwardmichigan.org/wp-content/uploads/2017/03/step-foward-fb-1200x628-house.jpg",
    "subtitle": "We have the right hat for everyone.We have the right hat for everyone.We have the right hat for everyone.",
    "buttons": [
      {
        "postback": "https://f1948e04.ngrok.io",
        "text": "View Website"
      }, {
        "text": "Start Chatting",
        "postback": "PAYLOAD EXAMPLE"
      }
    ]
  }];
  handleCardMessages(elements, sender)
break;

We are sending 3 objects in the array, so it will be a carousel; if we send only one, it will be a simple generic template.

Create handleCardMessages() to handle the above elements:

async function handleCardMessages(messages, sender) {
  let elements = [];
  for (var m = 0; m < messages.length; m++) {
    let message = messages[m];
    let buttons = [];
    for (var b = 0; b < message.buttons.length; b++) {
      let isLink = message.buttons[b].postback.substring(0, 4) === "http";
      let button;
      if (isLink) {
        button = {
          type: "web_url",
          title: message.buttons[b].text,
          url: message.buttons[b].postback
        };
      } else {
        button = {
          type: "postback",
          title: message.buttons[b].text,
          payload: message.buttons[b].postback
        };
      }
      buttons.push(button);
    }
    let element = {
      title: message.title,
      image_url: message.imageUrl,
      subtitle: message.subtitle,
      buttons: buttons
    };
    elements.push(element);
  }
  await sendGenericMessage(sender, elements);
}

The above function builds a payload that the Facebook API accepts. Let’s make the sendGenericMessage() function to send the elements:

const sendGenericMessage = async (recipientId, elements) => {
  var messageData = {
    recipient: {
      id: recipientId
    },
    message: {
      attachment: {
        type: "template",
        payload: {
          template_type: "generic",
          elements: elements
        }
      }
    }
  };
  await callSendAPI(messageData);
}

The callSendAPI() function will send the messageData to the Send API, and the user will get the generic template or carousel as a response.

Conclusion : –

I have shown how to send each type of response to Facebook Messenger; now it’s your turn to think of creative ideas for building a chatbot. Feel free to hit the comments if you get stuck anywhere.

Chatbot for Facebook Messenger using dialogflow and Node.js: Part3


Overview

Here we are going to build a simple Facebook Messenger chatbot using Dialogflow and Node.js. We won’t be going too deep into it, but we will cover all the kinds of responses that the Messenger platform supports, like the generic template, receipt, button, media, list and graph. I hope you’ve gone through the second part of this blog, Part-2; it covers how to send rich messages like video, image and carousel.

Prerequisite

  • Facebook Page
  • Facebook Developer Account
  • Understanding of Dialogflow
  • Knowledge of Node.js

Send List template

chatbot-fb3-image1

The list template is a list of 2-4 elements with an optional button at the bottom. Each item may contain a thumbnail image, a title, a subtitle and one button.

Create new case in Switch statement,

case "send-list":
    const list = {
        "template_type": "list",
        "top_element_style": "compact",
        "elements": [
            {
                "title": "Classic T-Shirt Collection",
                "subtitle": "See all our colors",
                "image_url": "http://pngimg.com/uploads/tshirt/tshirt_PNG5450.png",
                "buttons": [
                    {
                        "title": "View",
                        "type": "web_url",
                        "url": "https://yudiz-bot.herokuapp.com/collection",
                        "messenger_extensions": true,
                        "webview_height_ratio": "tall",
                        "fallback_url": "https://yudiz-bot.herokuapp.com"
                    }
                ]
            },
            {
                "title": "Classic White T-Shirt",
                "subtitle": "See all our colors",
                "default_action": {
                    "type": "web_url",
                    "url": "https://yudiz-bot.herokuapp.com/view?item=100",
                    "messenger_extensions": false,
                    "webview_height_ratio": "tall"
                }
            },
            {
                "title": "Classic Blue T-Shirt",
                "image_url": "http://pngimg.com/uploads/tshirt/tshirt_PNG5450.png",
                "subtitle": "100% Cotton, 200% Comfortable",
                "default_action": {
                    "type": "web_url",
                    "url": "https://yudiz-bot.herokuapp.com/view?item=101",
                    "messenger_extensions": true,
                    "webview_height_ratio": "tall",
                    "fallback_url": "https://yudiz-bot.herokuapp.com"
                },
                "buttons": [
                    {
                        "title": "Shop Now",
                        "type": "web_url",
                        "url": "https://yudiz-bot.herokuapp.com/shop?item=101",
                        "messenger_extensions": true,
                        "webview_height_ratio": "tall",
                        "fallback_url": "https://yudiz-bot.herokuapp.com"
                    }
                ]
            }
        ],
        "buttons": [
            {
                "title": "View More",
                "type": "postback",
                "payload": "payload"
            }
        ]
    }
    sendListMessege(sender, list)
break;

To send this list Create new function,

const sendListMessege = async (recipientId, elements) => {
  var messageData = {
    recipient: {
      id: recipientId
    },
    message: {
      attachment: {
        type: "template",
        payload: elements
      }
    }
  };
  await callSendAPI(messageData);
}

The callSendAPI() function will send the messageData to the Send API, and the user will get the list as a response.

Send Button template

chatbot-fb3-image2

The button template includes a text message with up to three attached buttons. We can send three types of buttons: postback, phone call and URL.

Create new case in switch statement,

case "send-button":
  const responseText = "exmple buttons";
  const elements = [{
    "type": "web_url",
    "url": "https://f1948e04.ngrok.io",
    "title": "URL",
  }, {
    "type": "postback",
    "title": "POSTBACK",
    "payload": "POSTBACK TEST"
  }, {
    "type": "phone_number",
    "title": "CALL",
    "payload": "+919510733999"
  }]
  sendButtonMessage(sender, responseText, elements)
break;

Create a sendButtonMessage() function to send those elements:

const sendButtonMessage = async (recipientId, text, buttons) => {
  var messageData = {
    recipient: {
      id: recipientId
    },
    message: {
      attachment: {
        type: "template",
        payload: {
          template_type: "button",
          text: text,
          buttons: buttons
        }
      }
    }
  };
  await callSendAPI(messageData);
}

The callSendAPI() function will send the messageData to the Send API, and the user will get the buttons as a response.

Send Open Graph template

chatbot-fb3-image3

The open graph template allows you to send an open graph URL with an optional button. Currently, only sharing songs is supported.

Create new case in Switch Statement,

case "send-graph" :
  var element = [{
    "url": "https://open.spotify.com/album/1XbZ2tMfcQTbVkr55JnoRg",
    "buttons": [
      {
        "type": "web_url",
        "url": "https://en.wikipedia.org/wiki/Rickrolling",
        "title": "View More"
      }
    ]     
  }]  
  sendGraphTemplate(sender,element);
break;

Now create sendGraphTemplate() function to send element,

const sendGraphTemplate = async (recipientId, elements) => {
  var messageData = {
    recipient: {
      id: recipientId
    },
    message: {
      attachment: {
        type: "template",
        payload: {
          template_type: "open_graph",
          elements: elements
        }
      }
    }
  };
  await callSendAPI(messageData);
}

The callSendAPI() function will send the messageData to the Send API, and the song will appear in a bubble that lets the user see the album art and preview the song.

Send Receipt template

chatbot-fb3-image4

When you are making a chatbot for a shopping-related app and a user buys something from your bot, you should send a receipt at the end of the payment; that is what Facebook’s receipt template is for.

Create new case in switch statement,

case "send-receipt":
    const recipient_name = "Nikhil Savaliya";
    const currency = "INR";
    const payment_method = "Visa 2345";
    const timestamp = 1428444852;
    const elementRec = [{
        "title": "Classic Blue T-Shirt",
        "subtitle": "100% Soft and Luxurious Cotton",
        "quantity": 1,
        "price": 350,
        "currency": "INR",
        "image_url": "http://pngimg.com/uploads/tshirt/tshirt_PNG5450.png"
    }];
    const address = {
        "street_1": "A-6, First Floor",
        "street_2": "Safal Profitaire,",
        "city": "Ahmedabad",
        "postal_code": "380015",
        "state": "Gujarat",
        "country": "IN"
    };
    const summary = {
        "subtotal": 350.00,
        "shipping_cost": 4.95,
        "total_tax": 6.19,
        "total_cost": 361.14
    };
    const adjustments = [
        {
            "name": "New Customer Discount",
            "amount": 20
        },
        {
            "name": "$10 Off Coupon",
            "amount": 10
        }
    ];
    const order_url = "https://37cf1e51.ngrok.io"
    sendReceiptMessage(sender,
        recipient_name,
        currency,
        payment_method,
        timestamp,
        elementRec,
        address,
        summary,
        adjustments,
        order_url);
break;

Create sendReceiptMessage() function to send above elements,

const sendReceiptMessage = async (
  recipientId,
  recipient_name,
  currency,
  payment_method,
  timestamp,
  elements,
  address,
  summary,
  adjustments,
  order_url
 ) => {
  var receiptId = "order" + Math.floor(Math.random() * 1000);
  var messageData = {
    recipient: {
      id: recipientId
    },
    message: {
      attachment: {
        type: "template",
        payload: {
          template_type: "receipt",
          recipient_name: recipient_name,
          order_number: receiptId,
          currency: currency,
          payment_method: payment_method,
          order_url: order_url,
          timestamp: timestamp,
          address: address,
          summary: summary,
          adjustments: adjustments,
          elements: elements,
        }
      }
    }
  };
  await callSendAPI(messageData);
}

This way you can generate and send a receipt to users when they make a payment.

Conclusion : –

I have shown how to send each type of response to Facebook Messenger; now it’s your turn to think of creative ideas for building a chatbot. Feel free to hit the comments if you get stuck anywhere.


Beacon in Android


Overview

A beacon is a small Bluetooth radio transmitter. It simply broadcasts radio signals made up of a combination of letters and numbers, transmitted at a regular interval of approximately 1/10 of a second. A beacon uses BLE technology.

beacon-image-1

What is BLE?

Bluetooth Low Energy (BLE), also called Bluetooth Smart, is a wireless personal area network technology designed by the Bluetooth SIG. The key differences between classic Bluetooth and BLE are:

Classic Bluetooth VS BLE

How beacon works?

A beacon simply broadcasts radio signals at a regular interval of time. It is like a lighthouse: instead of emitting visible light, it broadcasts radio signals.

As shown in the image, a beacon simply transmits radio signals; once a device comes into its range, the device is notified, and we can then handle this event as per our requirements.

Components of a beacon

  1. Tiny computer with bluetooth smart connectivity.
  2. Battery
  3. Firmware telling the beacon what it should do exactly.
  4. Sensors

What beacon packet contains?

A beacon packet generally contains three fields:

  1. Beacon unique identifiers
  2. Minor
  3. Major

Let’s take a small example to understand what these mean. Suppose company X is located in a city, has many branches, and each branch contains many departments.
Here,
the beacon unique identifier corresponds to company X,
the major value corresponds to a branch,
and the minor value corresponds to a department.

Now, when your device gets into a beacon’s range, it receives information such as “company X, branch Y, department Z”.

Types of beacons

The following types of beacons are available in the market:

  1. iBeacon
  2. Eddystone
  3. AltBeacon, and many more.

1. iBeacon :

iBeacon is Apple’s beacon format, introduced in December 2013. Devices implementing the iBeacon protocol can send only one type of signal, identified by a UUID.

2. Eddystone :

A newer beacon format introduced by Google in July 2015. It is open source and available on GitHub. Devices implementing the Eddystone protocol can send three types of signals:

  1. UID
  2. URL
  3. TLM

Example

Here is a code snippet showing how to scan for a beacon in Android. I’m using the Estimote Android SDK to scan for beacons.

Add dependencies

implementation 'com.estimote:sdk:1.4.1'

Initialize the SDK; if you don’t have an appId and appToken, pass them as blank or null. Then create a BeaconManager object, which will help us connect to the beacon, as shown in the code below.

EstimoteSDK.initialize(context, appId, appToken);
EstimoteSDK.enableDebugLogging(true);
beaconManager = new BeaconManager(this);
beaconManager.setBackgroundScanPeriod(1000, 1000);
beaconManager.connect(new BeaconManager.ServiceReadyCallback() {
   @Override
   public void onServiceReady() {
       Log.d(TAG, "onServiceReady: ");
   }
});
beaconManager.setErrorListener(new BeaconManager.ErrorListener() {
   @Override
   public void onError(Integer errorId) {
       Log.e(TAG, "onError: " + errorId);
   }
});

Now, if you want to scan for a specific beacon, you need to create a beacon region. Pass the region name, its UUID, and the major and minor values to its constructor.

beaconManager = ((MyApp) getApplicationContext()).beaconManager;
beaconRegion = new BeaconRegion("monitored region",
       UUID.fromString("B9407F30-F5F8-466E-AFF9-25556B57FE6D"),
       "major", "minor");
beaconManager.setMonitoringListener(new BeaconManager.BeaconMonitoringListener() {
   @Override
   public void onEnteredRegion(BeaconRegion region, List<Beacon> beacons) {
       beaconAdapter.addItems(beacons);
       Log.d(TAG, "onEnteredRegion: ");
   }

   @Override
   public void onExitedRegion(BeaconRegion region) {
       Toast.makeText(MainActivity.this, "onExitedRegion", Toast.LENGTH_SHORT).show();
       Log.d(TAG, "onExitedRegion: " + region);
   }
});

To start and stop monitoring a specific beacon region:

beaconManager.startMonitoring(beaconRegion);

beaconManager.stopMonitoring(beaconRegion.getIdentifier());

Using the BeaconManager object, set the beacon monitoring listener. The listener has two overridden methods: onEnteredRegion gives us the list of all nearby scanned beacons, and onExitedRegion gives us the beacon region from which the device exited.
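
Putting these pieces together, here is a minimal sketch (using only the calls shown above) that connects first and starts monitoring once the BeaconManager service is ready:

beaconManager.connect(new BeaconManager.ServiceReadyCallback() {
   @Override
   public void onServiceReady() {
       // Safe to start monitoring only after the manager has connected to its service
       beaconManager.startMonitoring(beaconRegion);
   }
});

// Later, for example in onStop(), stop monitoring the region
beaconManager.stopMonitoring(beaconRegion.getIdentifier());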

@Override
protected void onDestroy() {
   if (beaconManager != null)
       beaconManager.disconnect();
     super.onDestroy();
}

Finally, when you no longer want to scan for beacons, disconnect using the BeaconManager object.

Yudiz portfolio

Here at Yudiz, we have developed apps wherein beacons are used at restaurants: when a user’s device gets into a beacon’s range, the user is rewarded with offers.

Conclusion

The future of beacons is bright, and one can imagine a wide range of applications built on this technology.

Introduction to Upcoming AndroidX with Material Components


What is AndroidX?

AndroidX (Android Extension library) is the new era of the Android Support Library that Android developers have been using for 7+ years. AndroidX supports newer OS features on older versions of Android, along with newer device-specific UX, new features under Android KTX, debugging and testing. To provide smaller and more focused packages, AndroidX redesigns the package structure so that both the Architecture Components and the support libraries have simplified names. This should help make clear which dependencies should be included in the APK.

Now the biggest question that might arise in every developer’s mind: why are there components and packages named “v7” when the minimal SDK level we support is 14? The key to working across different versions of Android is understanding the division between APIs that are bundled with the platform and APIs that are shipped as static libraries to developers. So developers, it’s time to say “Hello World” to AndroidX.

Migrate from the Support Library to AndroidX

In Android Studio, go to Refactor -> Refactor to AndroidX. This feature is available in Android Studio Canary 14 for apps targeting Android P. Android Studio will update the library references directly to androidx.

Making the above AndroidX changes helps developers ship high-quality applications in less time, but AndroidX has an impact on existing code and migration takes time. For that reason, Google is also providing parallel updates to android.support during the P preview SDK timeframe, with 28.0.0 as the final feature release under android.support.

AndroidX Refactoring

Now to create a new project using androidx-packaged dependency, your new project needs to target API level 28, and you will need to add the following lines to your gradle.properties:

android.enableJetifier=true
android.useAndroidX=true

Some of the package changes from android.support (28.0.0-alpha1) to androidx (1.0.0-alpha1) will look like this (these may change slightly during the alpha phase):

For the support library -> android.support.** to androidx.**

For the dataBinding library -> android.databinding.** to androidx.databinding.**

For the design library -> android.design.** to com.google.android.material.**

For Room database under
Architecture Components -> android.arch.persistence.room.** to androidx.room.**

For more refactoring mappings, refer to this.

Material Components

All the new announcements made at Google I/O 2018 bring huge changes: expanded design guidance, tools geared toward closing the gap between design and development, and customisable UI components not only for Android but also for iOS, the Web and Flutter. The Design Library now has a drop-in replacement in the form of Material Components, available in 28.0.0-alpha1 or AndroidX, which helps you build beautiful digital experiences even faster and is a huge change in the world of design and development.

Here are a few steps to set up the environment for Material Design:

  • Use Android Studio Canary 14 for apps targeting Android P, with support-v4:28.0.0-alpha1 or AndroidX.
  • Gradle-wrapper.properties
    distributionUrl=https\://services.gradle.org/distributions/gradle-4.8-all.zip
  • Update build.gradle(project-level)
    classpath 'com.android.tools.build:gradle:3.2.0-alpha11'
  • Update build.gradle (Module level)
    android {
    compileSdkVersion 28
    defaultConfig {
    ..
    targetSdkVersion 27
    }
    ..
    }

Under the dependencies block:

  • If you are using support lib 28.0.0-alpha1
    build.gradle (module level)
    dependencies {
    ..
     api 'com.android.support:design:28.0.0-alpha1'  
    implementation 'com.android.support:support-v4:28.0.0-alpha1' 
    ..
    }
  • If you are using AndroidX, replace api 'com.android.support:design:28.0.0-alpha1' with implementation 'androidx.appcompat:appcompat:1.0.0-beta01' and implementation 'com.android.support:support-v4:28.0.0-alpha1' with implementation 'com.google.android.material:material:1.0.0-beta01'.
    build.gradle (module level)
    dependencies {
    ..
    implementation 'com.google.android.material:material:1.0.0-beta01'
    implementation 'androidx.appcompat:appcompat:1.0.0-beta01'
    ..
    }
  • Update App theme
    style.xml
    <resources>
    <style name="AppTheme" parent="Theme.MaterialComponents.Light.NoActionBar">
    <item name="colorPrimary">@color/colorPrimary</item>
    <item name="colorPrimaryDark">@color/colorPrimaryDark</item>
    <item name="colorAccent">@color/colorAccent</item>
    </style>
    </resources>

1. Material TextInputEditText

To create a material text field, add a TextInputLayout to your XML layout and a TextInputEditText as a direct child.

androidx-image1

activity_login.xml

<com.google.android.material.textfield.TextInputLayout
   android:id="@+id/password_text_input"
   style="@style/TextInputLayout"
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:hint="Password"
   app:errorEnabled="true">

   <com.google.android.material.textfield.TextInputEditText
       android:id="@+id/password_edit_text"
       android:layout_width="match_parent"
       android:layout_height="wrap_content"
       android:inputType="textPassword" />
</com.google.android.material.textfield.TextInputLayout>

style.xml

<style name="TextInputLayout" parent="Widget.MaterialComponents.TextInputLayout.OutlinedBox">
   <item name="hintTextAppearance">@style/HintText</item>
   <item name="android:paddingBottom">8dp</item>
</style>

To make an outlined text field, apply a style such as:

style="Widget.MaterialComponents.TextInputLayout.OutlinedBox"

2. Material Button

androidx-image2

Here, you can use one of these styles for a Material Button, for example:

style="@style/Widget.MaterialComponents.Button"

The above style sets colorPrimary as the background color.

style="@style/Widget.MaterialComponents.Button.TextButton"

This sets a transparent background for your button.

Another great thing about Material Components is that everything is dynamic. Developers no longer need to create a drawable to customise a button. We can directly set cornerRadius, backgroundTint, icon, iconTint and iconPadding from XML, and we get built-in touch feedback (the MDC ripple) and elevation by default.

activity_login.xml

<com.google.android.material.button.MaterialButton
   style="@style/Widget.MaterialComponents.Button"
   android:layout_width="wrap_content"
   android:layout_height="wrap_content"
   android:text="Messages"
   android:minWidth="200dp"
   app:cornerRadius="16dp"
   app:icon="@drawable/ic_action_setting"
   app:backgroundTint="@color/colorAccent"
   app:iconTint="@color/light_pitch"
   app:iconPadding="-12dp" />

NOTE: You can access all the examples in our GITHUB link

3. Bottom AppBar And FAB

BottomAppBar is an evolution of the standard toolbar and one of the defining features of the updated Material guidance. It puts more focus on features, increases engagement, and visually anchors the UI.

androidx-image3

act_bottom_appbar_behaviour.xml

<com.google.android.material.bottomappbar.BottomAppBar
..
app:menu="@menu/bottom_appbar_menu_primary"
..
/>

BottomAppbarBehaviour.kt

class BottomAppbarBehaviour : AppCompatActivity() {
     override fun onCreate(savedInstanceState: Bundle?) {
      ..
      setSupportActionBar(appbar)
      ..
    }
}

The code above attaches the menu to the BottomAppBar.

activity_bottom_app_behaviour.xml

<androidx.coordinatorlayout.widget.CoordinatorLayout 
   xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:id="@+id/toolbar"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

   <!-- other components and views -->

    <com.google.android.material.bottomappbar.BottomAppBar
        android:id="@+id/appbar"
        style="@style/Widget.MaterialComponents.BottomAppBar"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_gravity="bottom"
        android:backgroundTint="@color/colorAccent"
        app:fabAlignmentMode="center"
        app:fabCradleMargin="5dp"
        app:fabCradleRoundedCornerRadius="15dp"
        app:fabCradleVerticalOffset="5dp"
        app:hideOnScroll="true"
        app:layout_scrollFlags="scroll|enterAlways"
        app:menu="@menu/bottom_appbar_menu_primary"
        app:navigationIcon="@drawable/ic_menu_24dp"
        app:popupTheme="@style/ThemeOverlay.AppCompat.Light"
        app:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar" />

    <com.google.android.material.floatingactionbutton.FloatingActionButton
        android:id="@+id/fab_bar"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:src="@drawable/ic_local_florist_black_24dp"
        app:fabCustomSize="50dp"
        app:layout_anchor="@id/appbar" />
</androidx.coordinatorlayout.widget.CoordinatorLayout>

bottom_appbar_menu_primary.xml

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:tools="http://schemas.android.com/tools"
   xmlns:app="http://schemas.android.com/apk/res-auto"
   xmlns:android="http://schemas.android.com/apk/res/android">

   <item
       android:id="@+id/app_bar_search"
       android:icon="@drawable/ic_search_black_24dp"
       android:title="@string/action_search"
       app:showAsAction="ifRoom"/>

</menu>

4. BackDrop Menu

Backdrop is one of the newest features in Material Design: the furthest-back surface of an app, appearing behind all other content and components. A backdrop menu is composed of two surfaces: a back layer (which displays actions and filters) and a front layer (which displays content). It can be used to display interactive information and actions, such as navigation or content filters.

androidx-image4

First of all, add the menu:

layout_backdrop.xml

<?xml version="1.0" encoding="utf-8"?>
<merge xmlns:android="http://schemas.android.com/apk/res/android">

   <com.google.android.material.button.MaterialButton
       style="@style/Button"
       android:layout_width="wrap_content"
       android:layout_height="wrap_content"
       android:text="@string/nainital" />

   <com.google.android.material.button.MaterialButton
       style="@style/Button"
       android:textColor="@android:color/white"
       android:layout_width="wrap_content"
       android:layout_height="wrap_content"
       android:text="@string/Manali" />

   <com.google.android.material.button.MaterialButton
       style="@style/Button"
       android:textColor="@android:color/white"
       android:layout_width="wrap_content"
       android:layout_height="wrap_content"
       android:text="@string/camel_safari_jaisalmer" />

   <View
       android:layout_width="56dp"
       android:layout_height="1dp"
       android:layout_margin="16dp"
       android:background="?android:attr/textColorPrimary" />

   <com.google.android.material.button.MaterialButton
       style="@style/Button"
       android:textColor="@android:color/white"
       android:layout_width="wrap_content"
       android:layout_height="wrap_content"
       android:text="@string/kaziranga" />

</merge>

Add this layout file to the activity’s layout file:

activity_backdrop.xml

<LinearLayout
   style="@style/Backdrop"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:gravity="center_horizontal"
   android:orientation="vertical">

   <include layout="@layout/layout_backdrop" />
</LinearLayout>

The Backdrop style will look like this:

styles.xml

<style name="Backdrop" parent="">
   <item name="android:background">?attr/colorAccent</item>
</style>

The section below is responsible for setting up the toolbar.

BackDropActivity.kt

this.setSupportActionBar(app_bar)

Add Motion
The motion used should be eye-catching yet small, because it is applied to repeated actions. The motion here is the front shape moving straight down. We can set this click listener with AccelerateDecelerateInterpolator() and icons for the open and close menu in BackDropActivity.kt’s onCreate() to tweak the motion of the front layer.

BackDropActivity.kt

this.setSupportActionBar(app_bar)
app_bar.setNavigationOnClickListener(NavigationIconClickListener(
       this,
       recycler_view,
       AccelerateDecelerateInterpolator(),
       ContextCompat.getDrawable(this, R.drawable.branded_menu),
       ContextCompat.getDrawable(this, R.drawable.close_menu)))

5. Material Chips

androidx-image5

A chip is nothing but a rounded button that consists of a label, an optional chip icon and an optional close button. To represent text as a semantic entity, a text field can replace the text with a ChipDrawable.

<com.google.android.material.chip.Chip
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    app:chipText="@string/hello_world"/>

There are some updated Material styles that match the latest Material theme, as listed below.
-> Entry chips

This style contains an optional chip icon, an optional close icon, and is optionally checkable.

style="@style/Widget.MaterialComponents.Chip.Entry"

-> Filter chips

These contain an optional chip icon and an optional close icon; the difference is that they are always checkable.

style="@style/Widget.MaterialComponents.Chip.Filter"

-> Choice chips

These chips help users make a single selection from a finite set of options; they contain an optional chip icon and are always checkable.

style="@style/Widget.MaterialComponents.Chip.Choice"

-> Action chips

These chips trigger an action related to the primary content.

style="@style/Widget.MaterialComponents.Chip.Action"

With the updated Material theme, chips get the updated Material styles by default.

Conclusion

So devs, these are a few Material Design components with which you can create attractive and beautiful digital experiences. You must try your hands at the updated Material Components.

To know more about the Material Design announcements from Google I/O 2018, refer to this.

Setting Bundle for Your App


Overview

Hello friends, in this tutorial we are going to learn how to implement a Settings Bundle in our project and how to use it.

In iOS, the Foundation framework provides the low-level mechanism to store preference data. Using a Settings Bundle, you can configure application-level settings. The Settings Bundle’s data is managed by UserDefaults.

Before creating the project, let’s discuss the preference items and their properties.

1) Group: The group type is for organizing groups of preferences on a single page. The group type does not represent a configurable preference. It simply contains a title string that is displayed immediately before one or more configurable preferences.

Properties:
a) Type: This property specifies the type of the preference. E.g TextField, Title, Toggle Switch, Slider etc
b) Title: This property is used to set the title of the preference.

These two properties are common to all preference items.

group

2) Title: The title type displays a read-only string value. You can use this type to display read-only preference values.

Properties:
a) Default Value: This property is used to set the default value. E.g Build Version
b) Identifier: This unique identifier is used to save and retrieve the preference value.

title

3) TextField: The text field type displays a title (optional) and an editable text field. It is used to take input from the user.
The key for this type is PSTextFieldSpecifier.

Properties:
a) Identifier: This unique identifier is used to save and retrieve the preference value.
b) TextField Is Secure: This property is used to enter the secure text e.g Password. It has two values: 1. Yes, 2. No
c) KeyboardType: This property is used to set the keyboard type e.g URL, Email Address, Number Pad etc.
d) Auto Capitalization: This property is used to set the capitalization. E.g Sentence, Word, All Character

4) Toggle Switch: The toggle switch is an ON/OFF type button. One can use this to configure a preference where one of two values is required.

Properties:
a) Default Value: Used to set the default toggle as an ON or OFF. It has two values “YES” and “NO”
b) Value for OFF: Used to set the toggle OFF value
c) Value for ON: Used to set the toggle ON value

5) Multi Value: The multi value type lets the user select one value from a list of values. You can use this type for a preference that supports a set of mutually exclusive values. The values can be of any type.

Properties:
a) Titles: This property is used to set the title of the Multi items.
b) Values: This property is used to set the value for the title.

6) Slider: The slider type displays a slider control. You can use this type for a preference that represents a range of values. The value for this type is a real number whose minimum and maximum value you specify.

Properties:
a) Minimum Value: Used to set the Minimum value of the Slider.
b) Maximum Value: Used to set the Maximum value of the Slider.

How to Implement:

1) Configure a new project in Xcode
2) Press cmd + N
3) Select Settings Bundle from the Resource section

howtoimaplement-1

howtoimaplement-2

Now we are ready to start the implementation. We are going to take input from the user, reset the app data, and display the build version and number.

setting-bundle

Make a class which handles the Settings Bundle data. In the code below, I have implemented the functionality to reset the application data and set the build version and number.

import Foundation
class SettingsBundleHelper {
    struct SettingsBundleKeys {
        static let Reset = "reset_preference"
        static let BuildVersionKey = "build_preference"
        static let AppVersionKey = "version_preference"
    }

    class func checkAndExecuteSettings() {
        if UserDefaults.standard.bool(forKey: SettingsBundleKeys.Reset) {
//            UserDefaults.standard.set(false, forKey: SettingsBundleKeys.Reset)
            let appDomain: String? = Bundle.main.bundleIdentifier
            UserDefaults.standard.removePersistentDomain(forName: appDomain!)
            //reset userDefaults..
            //CoreDataDataModel().deleteAllData()
            //delete all other user data here..
            UserDefaults.standard.synchronize()
            print(Array(UserDefaults.standard.dictionaryRepresentation().keys).count)
        }
    }

    class func setVersionAndBuildNumber() {
        let version: String = Bundle.main.object(forInfoDictionaryKey: "CFBundleShortVersionString") as! String
        UserDefaults.standard.set(version, forKey: "version_preference")
        let build: String = Bundle.main.object(forInfoDictionaryKey: "CFBundleVersion") as! String
        UserDefaults.standard.set(build, forKey: "build_preference")
    }
}

I have called these functions from the AppDelegate.

func applicationDidBecomeActive(_ application: UIApplication) {
        SettingsBundleHelper.checkAndExecuteSettings()
        SettingsBundleHelper.setVersionAndBuildNumber()
}

Every time the app becomes active, it will check the settings preferences.

How to fetch the data from the Setting Bundle?

To fetch the UserDefaults data, we have to add an observer and register the UserDefaults in our code, which notifies us when something has changed in UserDefaults.

/*!
     -registerDefaults: adds the registrationDictionary to the last item in every search list. This means that after NSUserDefaults has looked for a value in every other valid location, it will look in registered defaults, making them useful as a "fallback" value. Registered defaults are never stored between runs of an application, and are visible only to the application that registers them.

     Default values from Defaults Configuration Files will automatically be registered.
     */
    open func register(defaults registrationDictionary: [String : Any])

  /*!
     NSUserDefaultsDidChangeNotification is posted whenever any user defaults changed within the current process, but is not posted when ubiquitous defaults change, or when an outside process changes defaults. Using key-value observing to register observers for the specific keys of interest will inform you of all updates, regardless of where they're from.
     */
    public class let didChangeNotification: NSNotification.Name

Call addNotificationObserver() and registerSettingsBundle() in viewDidLoad.

func registerSettingsBundle() {
        let appDefaults = [String:AnyObject]()
        UserDefaults.standard.register(defaults: appDefaults)
    }

func addNotificationObserver() {
        NotificationCenter.default.addObserver(self, selector: #selector(fetchDefaultSettingValues), name: UserDefaults.didChangeNotification, object: nil)
    }

@objc func fetchDefaultSettingValues() {
        sbObjData.name = sbObjData.getUserDefaultStringValue(key: "name_preference")
        sbObjData.password = sbObjData.getUserDefaultStringValue(key: "password_preference")
        sbObjData.reset = sbObjData.getUserDefaultBoolValue(key: "reset_preference")
        let experiance_status     = sbObjData.getUserDefaultStringValue(key: "experience_preference")

        if experiance_status == ExperianceLevel.Beginner.rawValue{
            sbObjData.expertyLevel = .Beginner
        }else if experiance_status == ExperianceLevel.Expert.rawValue{
            sbObjData.expertyLevel =  .Expert
        }else if experiance_status == ExperianceLevel.Master.rawValue{
            sbObjData.expertyLevel =  .Master
        }
        self.tableView.reloadData()
    }

enum ExperianceLevel: String {
    case Beginner = "Beginner"
    case Expert = "Expert"
    case Master = "Master"
}

class SettingBundleData: NSObject {
    var name: String?
    var password: String?
    var reset:Bool = false
    var expertyLevel: ExperianceLevel = .Beginner

    override init() {

    }

    func getUserDefaultStringValue(key: String)-> String {
        return UserDefaults.standard.string(forKey: key) ?? ""
    }

    func getUserDefaultBoolValue(key: String)-> Bool {
       return UserDefaults.standard.bool(forKey: key)
    }
}

Introduction to Slices


Overview

What happens when your work is completed in one click instead of two or more? It gets done much more easily. Isn’t that quite impressive?

This feature can be achieved through Slices.

Slices are a new concept through which you can embed your application content in other surfaces, like the Google Search app. Slices are part of Android P and a great addition to its user interface. Slices can help users complete work faster by providing app content on outside surfaces.

Slices are backward compatible, which means they work on Android KitKat and later versions. This means slices are available on around 95% of Android devices. Isn’t that great news?

What actually are Slices?

Slices present a piece of your app content. Slices are:

  1. Templated
    This means that slices provide a rich layout and content system to express your app content in many ways.
    Using templates, you can add images, text and video to a slice and make it more visible to the user.
  2. Interactive
    Slices do not contain only static data; they contain a variety of components.
    They provide real-time data, deep links, inline actions, toggle buttons, sliders and scrolling content.
  3. Updatable
    We can iterate on slices frequently by adding more presentable surfaces to expand your app’s reach.
    You can also add more templates and controls to make slices more engaging and more powerful for users.

Architecture Overview

SliceProvider extends ContentProvider. Slices are built on top of content URIs, which means you can host a wide variety of slices from your app.

When an app wants to show your slice, your app gets a callback to onBindSlice() with the URI, and you decide what content you want to connect to that URI and return in the slice.

For an interactive slice, however, you have to update the data in the slice, and this is done through the notifyChange() method. You send a standard content-provider change notification on your slice URI; whoever is presenting your slice will then know it’s time to update, give you a callback, and you can return the updated data in response.
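
As a minimal sketch, assuming your slice lives at a URI like the one used later in this article (authority from the manifest provider, path handled in onBindSlice()), triggering an update could look like this:

// Tell any surface presenting this slice that its content has changed.
// The URI below is an example built from the provider authority and path shown later.
Uri sliceUri = Uri.parse("content://com.android.example.slicecodelab/temperature");
getContext().getContentResolver().notifyChange(sliceUri, null);
// The host will then call onBindSlice(sliceUri) again and receive the updated slice.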

Where are Slices going to launch?

Slices are going to launch in Search this year, where they will be used to enhance app predictions as the user searches.

There are two main use cases for how slices will appear:

  1. Application name
  2. General terms

When the user searches for your application name, they will see your application’s slice, letting them jump straight into your application.

The other one is general searches, such as an app deep link or a specific feature.

Let’s make our first Slice

Before starting to work on slices, you have to add the necessary slice libraries. You’ll need to add slice-core and slice-builders to your app’s gradle file.

dependencies {  
    implementation 'androidx.slice:slice-core:1.0.0-alpha1'
    implementation 'androidx.slice:slice-builders:1.0.0-alpha1'
}

To create a slice, you have to make a class which extends SliceProvider. Each slice has a URI, and the provider binds the slice to its URI.

The SliceProvider should be declared in your app’s manifest file, which is what allows other surfaces to find your slice.

This handles all required permissions internally, so you don’t have to define any permission separately.

<application
        ...

        <!-- To provide slices you must define a slice provider -->
        <provider
            android:authorities="com.android.example.slicecodelab"
            android:name=".MySliceProvider"
            android:exported="true">
        </provider>

        ...
    </application>

Let’s implement the class which extends SliceProvider:

public class MySliceProvider extends SliceProvider {
    @Override
    public boolean onCreateSliceProvider() {
        return true;
    }

    public Slice onBindSlice(Uri sliceUri) {
        switch(sliceUri.getPath()) {
            case "/temperature":
                return createTemperatureSlice(sliceUri);
        }
        return null;
    }

 }

Now it’s time to build our first slice.
A slice is created with the help of the ListBuilder class. You just have to create a row which is displayed in the slice, and you can set a title on it.

private Slice createTemperatureSlice(Uri sliceUri) {
        // Construct our parent builder
        ListBuilder listBuilder = new ListBuilder(getContext(), sliceUri, ListBuilder.INFINITY);

        // Construct the builder for the row
        ListBuilder.RowBuilder temperatureRow = new  ListBuilder.RowBuilder(listBuilder);                        

        // Set title
        temperatureRow.setTitle("Temperature");

        // Add the row to the parent builder
        listBuilder.addRow(temperatureRow);

        // Build the slice
        return listBuilder.build();
    }

To run an application that includes a slice, you have to install the Slice Viewer. To make a slice interactive, you have to set an action on it so that the user can navigate to a particular section of your application.

To run the application, you have to edit the run configuration by following these steps:

  1. In your project, select Run > Edit Configurations
  2. In top-left corner, click + button and select Android App
  3. Enter slice in the name field
  4. Select your app module in the Module dropdown
  5. Under Launch Options, select URL from the Launch dropdown
  6. Enter slice- in the URL field
  7. Example: slice-content://com.example.your.sliceuri/path
  8. Click OK.

You can also refer these steps from here.

This is how you can create and run a slice application in a few easy steps. You can also add various components to your slice to make it more attractive.

For example, you can add temperature increase and decrease icons to your slice like this:

private Slice createTemperatureSlice(Uri sliceUri) {

// Construct our parent builder
        ListBuilder listBuilder = new ListBuilder(getContext(), sliceUri, ListBuilder.INFINITY);

        // Construct the builder for the row
        ListBuilder.RowBuilder temperatureRow = new ListBuilder.RowBuilder(listBuilder);
        // Set title
        temperatureRow.setTitle("Temperature");

        SliceAction tempUp = new SliceAction(tempIntent,
                IconCompat.createWithResource(getContext(), R.drawable.ic_temp_up),
                "Increase temperature");
        SliceAction tempDown = new SliceAction(tempIntent,
                IconCompat.createWithResource(getContext(), R.drawable.ic_temp_down),
                "Decrease temperature");

        // Add the actions to appear at the end of the row
        temperatureRow.addEndItem(tempDown);
        temperatureRow.addEndItem(tempUp);
        // Set primary action for the row
        temperatureRow.setPrimaryAction(openTempActivity);

        // Add the row to the parent builder
        listBuilder.addRow(temperatureRow);

        // Build the slice
        return listBuilder.build();
}
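
The snippet above assumes two SliceActions that are not shown here: tempIntent (the PendingIntent fired when an end icon is tapped) and openTempActivity (the primary action for the row). Here is a minimal sketch of how they might be built; TemperatureBroadcastReceiver and TemperatureActivity are hypothetical classes, so substitute your own components.

// Hypothetical receiver that adjusts the temperature when an end icon is tapped
PendingIntent tempIntent = PendingIntent.getBroadcast(getContext(), 0,
        new Intent(getContext(), TemperatureBroadcastReceiver.class), 0);

// Primary action: open a (hypothetical) activity when the row itself is tapped
SliceAction openTempActivity = new SliceAction(
        PendingIntent.getActivity(getContext(), 0,
                new Intent(getContext(), TemperatureActivity.class), 0),
        IconCompat.createWithResource(getContext(), R.drawable.ic_temp_up),
        "Open temperature controls");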

Conclusion

“A slice is designed to solve a problem: I’m a user and want to get something quickly done on my device,”

A Trip to Android’s Future : Navigation Architecture Component


Overview

trip-to-android-image01

This time, Google developers have introduced many interesting concepts to make native Android development more efficient and faster. One of them is the Navigation Architecture Component, part of Android Jetpack and the new AndroidX package. It makes it easy to implement navigation in your Android app. It totally changes the way we navigate between fragments and also suggests a single-Activity architecture as the preferred architecture for Android Jetpack apps.

This architecture also supports deep links and fragments, which creates a cleaner, more user-friendly experience.

What, Why, and How.
Three common questions arise in everyone’s mind.

trip-to-android-image02

What is the Navigation Architecture Component?
Why was it invented?
How can we implement it in our app?

trip-to-android-image03

  1. The Navigation Architecture Component is here to replace the tedious maintenance of activity and fragment transactions.
  2. It was invented because handling activities and fragments the traditional way is an error-prone and lengthy process.
  3. To answer the third question, we need to dive into the details.

Prerequisite for the trip

*Note: For now, the Navigation Architecture Component requires Android Studio 3.2 Canary 14 or higher.

First of all you need to add the navigation fragment and UI libraries to your project. They are available via the google() repository.

implementation 'android.arch.navigation:navigation-fragment:1.0.0-alpha01'
implementation 'android.arch.navigation:navigation-ui:1.0.0-alpha01'

For Kotlin

implementation 'android.arch.navigation:navigation-fragment-ktx:1.0.0-alpha01'
implementation 'android.arch.navigation:navigation-ui-ktx:1.0.0-alpha01'

If you want to pass arguments as a bundle between fragments, you need to include the safeargs navigation classpath in build.gradle. This plugin generates code that allows type-safe access to the properties used in argument bundles.

buildscript {
    ...
    repositories {
            google()
    }
    dependencies {
            ...
            classpath 'android.arch.navigation:navigation-safe-args-gradle-plugin:1.0.0-alpha01'
    }
}

In your app module’s build.gradle, you can now apply the Gradle plugin as you normally do.

apply plugin: 'androidx.navigation.safeargs'

Now it’s time to roll…!

trip-to-android-image04

  1. First of all open project window then right-click on the resource(res) directory and select New from Android resource file.
  2. Mention the name in File name field, such as “navigation_graph”.
  3. Set Navigation from the Resource type drop-down list.
  4. Tap OK. The following occurs:
    • A navigation resource directory will be added within the res directory.
    • A navigation_graph.xml file will be available within the navigation directory.
    • The navigation_graph.xml file will open in the Navigation Editor. This XML file holds your navigation graph.
  5. Tap on Text tab to toggle to the XML text view. The XML for an empty navigation graph looks as shown below:
    <?xml version="1.0" encoding="utf-8"?>
    <navigation xmlns:android="http://schemas.android.com/apk/res/android">
    </navigation>
  6. Tap on Design to get back to the Navigation Editor.

A First Look at the Navigation Editor

trip-to-android-image05

The Navigation Editor’s divisions are:

  1. Destinations list – lists all the destinations available in the Graph Editor.
  2. Graph Editor – contains visual design of your navigation graph.
  3. Attributes Editor – contains attributes associated with destinations and actions in the navigation graph.

Identify destinations

The first step in creating a navigation graph is to identify the destinations in your app. You can create blank destinations or create destinations from fragments in an existing project.

Notice: The main activity “hosts” the navigation graph. In an app with multiple activity destinations, each activity hosts its own navigation graph. The Navigation Architecture Component is designed for apps that have a single main activity with multiple fragment destinations.

Connecting Fragments :

trip-to-android-image06

Transitions :

trip-to-android-image07

Provide the fragment transition animations for four states:

  1. Enter -When entering into next fragment
  2. Exit -When exiting from current fragment
  3. Pop Enter -When entering to previous fragment from current fragment
  4. Pop Exit – When exiting from current fragment to previous fragment

AndroidX also provides four default animations:

  • nav_default_enter_anim
  • nav_default_exit_anim
  • nav_default_pop_enter_anim
  • nav_default_pop_exit_anim

We can also add custom animations to the anim folder in resources. The editor automatically detects them and lists them in its Transitions dropdown.

trip-to-android-image08

Pop behaviour :

We can manage the back stack while navigating from one fragment to another.

trip-to-android-image09

Launch Option :

We can set the default launch option for a fragment with this property.

trip-to-android-image10

  • Single Top
    Launch a navigation fragment as Single Top. Using this, the system keeps only one instance of the fragment on top, just as an activity does.
    trip-to-android-image11
  • Document
    Launch a navigation fragment as a document. Doing this, the system stores the fragment’s document entry for its own use and will not recreate the document when navigating to that fragment again.
  • Clear Task
    Launch a navigation fragment as Clear Task. This will clear the back stack. When moving from one fragment to another, the back stack is cleared as shown below.
    trip-to-android-image12

Here is the XML view of the navigation graph:

<?xml version="1.0" encoding="utf-8"?>
<navigation xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:app="http://schemas.android.com/apk/res-auto"
   xmlns:tools="http://schemas.android.com/tools"
   app:startDestination="@id/displayFragment">

   <fragment
   android:id="@+id/displayFragment"
   android:name="com.example.yudizsolutions.navigationdemo.DisplayFragment"
   android:label="fragment_display"
   tools:layout="@layout/fragment_display" >

       <!--auto generated action when connecting fragments from design we can also add via xml coding-->
       <action
   android:id="@+id/action_displayFragment_to_loginFragment"
   app:destination="@id/loginFragment"
   app:enterAnim="@anim/nav_default_enter_anim"
   app:exitAnim="@anim/nav_default_exit_anim"
   app:launchSingleTop="true"
   app:popEnterAnim="@anim/nav_default_pop_enter_anim"
   app:popExitAnim="@anim/nav_default_pop_exit_anim"
   app:popUpTo="@+id/loginFragment"
   app:popUpToInclusive="true" />

       <!--argument can be also passed from xml-->
       <argument
           android:name="username"
           android:defaultValue="user"
           app:type="string" />

   </fragment>

</navigation>
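
To tie this graph to code, here is a minimal, hedged sketch of triggering the action above and reading the “username” argument with Safe Args. The generated class names (DisplayFragmentDirections, DisplayFragmentArgs) follow the plugin’s naming convention and may differ slightly between alpha releases; button stands for any clickable view inside DisplayFragment.

// Inside DisplayFragment: navigate using the action defined in the graph
// (Navigation comes from the androidx.navigation package).
button.setOnClickListener(view ->
        Navigation.findNavController(view)
                .navigate(DisplayFragmentDirections.actionDisplayFragmentToLoginFragment()));

// Read the "username" argument declared in the graph via the generated Args class.
String username = DisplayFragmentArgs.fromBundle(getArguments()).getUsername();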

Conclusion

The navigation library changes how we decouple routing logic. Easy-to-implement actions and type-safe arguments are great additions for a robust API.
