Channel: Yudiz Solutions Ltd.

Libra: A Financial Game Changer


Overview

Hello crypto lovers, did you hear the big news? Having dominated social media for over a decade, Facebook has now decided to pave its way into the cryptocurrency market with a new blockchain platform, Libra: a multi-million-dollar project that aims to create a global cryptocurrency. It has been over 10 years since the first Bitcoin was created. Let’s dive deep into the details.

libra_image1

Introduction

Libra, a global currency and financial infrastructure designed to empower billions of people, was just announced by Facebook. Users will soon be able to use Libra to transact on Messenger, WhatsApp, and Facebook.com.
Libra is fully backed by the “Libra Reserve”, a collection of low-volatility assets denominated in USD, GBP, EUR and JPY.
The not-for-profit “Libra Association” was created to manage the reserve and oversee future development.
Founding members of the association include Uber, eBay, Visa, Mastercard, PayPal, and Spotify.

libra_image2

Why Libra?

But whyyyyyyyy?

libra_image3

31% of the global population is completely unbanked, with no access to a bank account or mobile money.
$3.7 trillion could be added to the economies of developing countries by 2025 through increased access to digital financial services.
$16 billion a year could be saved by slashing remittance fees by just 5%!
A typical bank-to-bank transfer across borders takes three to five business days to complete.
With cryptocurrencies, this can be INSTANT.
Libra seeks to make sending money as simple as using these apps to instantly share messages and photos.
To enable this, Facebook is also launching an independent subsidiary called Calibra, which will build services that let you send, spend and save Libra, starting with a digital wallet that will be available in WhatsApp and Messenger and as a standalone app next year.

How It Differs From Other Blockchains

Right now, we have Bitcoin, Ethereum, EOS, Hyperledger and other blockchain platforms with different consensus algorithms. But some of these platforms have transaction-scalability problems, while others are fully anonymous by design. Let’s see how Libra differs.

How does it differ?

libra_image4

  1. Like Bitcoin, there’s no real-world identity on the blockchain.
  2. Like Hyperledger, it’s permissioned (at least to start).
  3. Like Ethereum, it makes money programmable.
  4. Also like Ethereum, it considers proof of stake the future, but it isn’t ready for it yet.
  5. Like Binance’s coin, it does a great deal of burning.
  6. Like Coda, clients don’t have to hold on to the whole transaction history.
  7. Like EOS, it hasn’t worked everything out yet.

Stay tuned for our next blog, where we will create our first transaction on Libra. Thanks!


First Transaction on Libra Blockchain


Overview

Imagine that one of your friends lives overseas and needs some Libra coins, which you have in abundance. How cool would it be if you could send the coins via Facebook or WhatsApp?

Sounds really surprising, right? Just read through the entire blog and you’ll know how.

libra_image1

Recently, Facebook launched the testnet of the Libra Blockchain. Today, we are going to perform our 1st transaction on the testnet.

If you aren’t familiar with the concept, go through our previous blog, Introduction of Libra, and you’ll get familiarized. Check it out here.

Let’s begin with the steps for executing your first transaction.

libra_image2

Step:1 Clone the Libra Core Repository

git clone https://github.com/libra/libra.git

Step:2 Setup Libra Core

cd libra
./scripts/dev_setup.sh

To set up Libra Core, run the setup script to install the dependencies.

The setup script performs these actions:

  1. Installs rustup
  2. Installs rust-toolchain
  3. Installs CMake
  4. Installs protoc
  5. Installs Go

Step:3 Build Libra CLI Client and Connect to the Testnet

./scripts/cli/start_cli_testnet.sh

libra_image3

This builds the client and connects to a validator node running on the Libra testnet.

Step:4 Check If the CLI Client Is Running on Your System

libra% account

libra_image5

Step:5 Create My Account

libra% account create

libra_image6

#0 is the index of My account.

Step:6 Create My Friend’s Account

libra% account create

libra_image7

#1 is the index for My Friend’s account.

Step:7 (optional) List Accounts

libra% account list

libra_image8

Add Libra Coins to My and My Friend’s Accounts

Step:8 Add Libra to My Account

libra% account mint 0 100

  • 0 is the index of My account.
  • 100 is the amount of Libra to be added to My account.

libra_image9

Step: 9 Add Libra to My Friend’s Account

libra% account mint 1 200

  • 1 is the index of My Friend’s account.
  • 200 is the amount of Libra to be added to My Friend’s account.

libra_image10

Step: 10 Check the Balance

libra% query balance 0

libra_image11

libra% query balance 1

libra_image12_2

Step: 11 Query the Accounts’ Sequence Numbers

libra% query sequence 0

libra_image13

libra% query sequence 1

libra_image14

A sequence number of 0 for both My and My Friend’s accounts indicates that no transactions from either My or My Friend’s account have been executed so far.

Step: 12 Transfer Money

libra% transfer 0 1 10

  • 0 is the index of My account.
  • 1 is the index of My Friend’s account.
  • 10 is the number of Libra to transfer.

libra_image15

Step: 13 Retrieve the information about the transaction

libra% query txn_acc_seq 0 0 true

libra_image16

Step: 14 Query Sequence Number After Transfer

libra% query sequence 0

libra_image17

libra% query sequence 1

libra_image18

Each time a transaction is sent from an account, the sequence number is increased by 1.
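The account, mint, transfer and sequence-number behaviour from this walkthrough can be captured in a toy model. This is a hypothetical Kotlin sketch for intuition only, not the actual Libra implementation:

```kotlin
// Toy model of Libra-style accounts: each account tracks a balance and a
// sequence number that increments once per executed transaction.
data class Account(var balance: Long = 0, var sequenceNumber: Long = 0)

class Ledger {
    private val accounts = mutableListOf<Account>()

    fun createAccount(): Int { accounts.add(Account()); return accounts.lastIndex }
    fun mint(index: Int, amount: Long) { accounts[index].balance += amount }
    fun balance(index: Int) = accounts[index].balance
    fun sequence(index: Int) = accounts[index].sequenceNumber

    // A transfer debits the sender, credits the recipient, and bumps only
    // the sender's sequence number, mirroring the CLI walkthrough above.
    fun transfer(from: Int, to: Int, amount: Long) {
        require(accounts[from].balance >= amount) { "insufficient balance" }
        accounts[from].balance -= amount
        accounts[to].balance += amount
        accounts[from].sequenceNumber += 1
    }
}
```

Running the same steps as the walkthrough (mint 100 and 200, then transfer 10 from account #0 to #1) leaves balances of 90 and 210, with sequence numbers 1 and 0.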

Step: 15 Check the Balance in Both Accounts After Transfer

libra% query balance 0

libra_image19

libra% query balance 1

libra_image20

Congratulations!

libra_image21

You have successfully executed your transaction on the Libra testnet and transferred 10 Libra from My account to My Friend’s account!

Demo Video:

Conclusion

So, in a nutshell, we have learnt how to perform a first transaction on the Libra Blockchain. I will come up with more examples and details in my upcoming blogs, so stay tuned and keep reading.

Understanding The Libra Protocol


Overview

Hello Blockchain enthusiasts! Are you excited about the new blockchain, Libra? Today we are going to understand the Libra Protocol. Let’s try to keep it short and simple.
In the near future, the Libra Blockchain aims to serve as a medium of exchange for billions of people around the world. This blockchain is maintained using the Libra Protocol.
If you don’t know much about the Libra Blockchain, one of my colleagues has written a very good article on it. Here it is.

Are you excited guys? Let’s dive deep into The Libra Protocol.

libraprotocol_image1

The Libra Protocol is made up of several components; let’s walk through them one by one.

States & Transactions

At any point in time, the blockchain has a state, also known as the ledger state. The ledger state represents the current snapshot of the data on the blockchain. This state is structured as a key-value store that maps account addresses to account values.
Users of the Libra Blockchain can update the ledger state by submitting transactions. A transaction consists of a transaction script and transaction arguments, such as the recipient address and the amount of Libra to send.
The ledger state is not updated until the transaction is committed by consensus.
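As a mental model, the ledger state can be pictured as an immutable key-value map from address to account value, where committing a transaction produces a new snapshot. This is an illustrative Kotlin sketch with hypothetical names; Libra's real data structures differ:

```kotlin
// Hypothetical sketch of the ledger state as a key-value store:
// account address -> account value (balance + sequence number).
data class AccountValue(val balance: Long, val sequenceNumber: Long)

typealias LedgerState = Map<String, AccountValue>

// Applying a committed transfer yields a NEW snapshot of the state;
// the previous snapshot stays intact.
fun applyTransfer(state: LedgerState, from: String, to: String, amount: Long): LedgerState {
    val sender = state.getValue(from)
    val recipient = state.getValue(to)
    require(sender.balance >= amount) { "insufficient balance" }
    return state + mapOf(
        from to AccountValue(sender.balance - amount, sender.sequenceNumber + 1),
        to to AccountValue(recipient.balance + amount, recipient.sequenceNumber)
    )
}
```

Note how the old snapshot is never mutated, which matches the idea that the ledger history records how each new state was computed from the previous one.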

Ledger History

Most blockchains maintain a linked list of blocks of transactions, each linked by the hash of the previous block. The Libra Protocol instead uses a single Merkle tree to provide an authenticated data structure.
The ledger history stores previously executed transactions as well as the associated events emitted by those transactions. The purpose of the ledger history is to keep a record of how the latest ledger state was computed. Unlike Bitcoin and Ethereum, there is no concept of blocks of transactions in the ledger history.
Any user can query the ledger history and use it to audit transaction execution.

libraprotocol_image2

Accounts

Libra has something like what Ethereum tries to achieve with account abstraction.

  • Each account has an instance of the standard module called LibraAccount.
  • This module stores basic information such as the balance, a sequence number (like the nonce in Ethereum) and an authentication key.

The Libra protocol does not link any account to a real-world identity. A user is free to create as many accounts as they like by generating multiple key-pairs. Accounts controlled by the same user have no visible link to one another.

Transaction Structure

In the Libra Blockchain, a transaction is a signed message that contains the following data:

  • Sender Address: Address of the account that sends the transaction.
  • Sender’s Public Key: The public key that corresponds to the private key used to sign the transaction.
  • Program: Generally, the program consists of the Move bytecode of a transaction script.
  • Gas Price: The number of Libra coins that the sender is willing to pay per unit of gas to execute the transaction.
  • Maximum Gas Amount: Maximum number of gas units that the transaction is allowed to consume.
  • Sequence Number: A number that must equal the sequence number stored under the sender’s account. Once the transaction executes, the sequence number is incremented by one, so only one transaction can ever be executed for a given sequence number.
  • Signature: The digital signature of the sender.
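Collected in one place, the fields above might be modelled like this. This is an illustrative Kotlin sketch; the field types are assumptions and this is not Libra's actual wire format:

```kotlin
// Illustrative model of the signed-transaction fields listed above.
// Types here are assumptions; the real encoding is defined by the Libra protocol.
data class RawTransaction(
    val senderAddress: String,       // account that sends the transaction
    val senderPublicKey: ByteArray,  // matches the signing private key
    val program: ByteArray,          // Move bytecode of the transaction script
    val gasPrice: Long,              // Libra the sender pays per unit of gas
    val maxGasAmount: Long,          // cap on gas units the transaction may consume
    val sequenceNumber: Long         // must equal the sender account's stored value
)

data class SignedTransaction(
    val raw: RawTransaction,
    val signature: ByteArray         // sender's digital signature over the raw transaction
)
```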

Move Programming Language

Move is a programming language created during the design of the Libra Protocol. Move is used to implement custom transactions and smart contracts on the Libra Blockchain.
Move has three important roles in the system:

  1. Enabling flexible transactions via transaction scripts.
  2. Allowing user-defined smart contracts via modules.
  3. Supporting configuration and extensibility of the Libra Protocol.

libraprotocol_image3

I know it’s a little bit confusing, but I’m assuming you got it. Let’s move forward, just a few more to go!

LibraBFT

The Libra Blockchain uses a variant of the HotStuff consensus protocol called LibraBFT. LibraBFT assumes that 3f+1 votes are distributed among a set of validators that may be honest or Byzantine. LibraBFT stays safe, preventing attacks such as forks and double spends, as long as at most f votes are controlled by Byzantine validators.
LibraBFT also maintains safety when validator nodes crash or restart, even if all validator nodes restart at the same time.
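The 3f+1 arithmetic implies that a set of n validators tolerates up to f = (n − 1) / 3 Byzantine validators. A quick sketch:

```kotlin
// For a validator set of size n, a BFT protocol that needs 3f + 1 votes
// tolerates f Byzantine validators, i.e. f = floor((n - 1) / 3).
fun maxByzantine(n: Int): Int = (n - 1) / 3

fun main() {
    // e.g. 4 validators tolerate 1 faulty validator; 100 tolerate 33.
    println(maxByzantine(4))
    println(maxByzantine(100))
}
```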

Validators (Validator Node)

When a user of the Libra Blockchain submits a transaction, it reaches a validator node. Validator nodes run the consensus protocol, execute the transaction, and store the transaction and its execution result on the blockchain. Validator nodes decide which transactions will be added to the blockchain, and in which order.

libraprotocol_image4

Conclusion

So guys, today we learnt about the different components of the Libra Protocol. There is a lot more to learn about the Libra Blockchain. Stay tuned for the upcoming blogs.

What is KOIN: KOtlin dependency INjection


Overview

Koin is a dependency injection framework for Kotlin. It is written entirely in pure Kotlin, which is why it is so efficient and lightweight, and it has very good support for Android. Are you new to DI? Let’s start from the bottom…

kotlin_image0

What is DI (dependency injection)?

Dependency Injection is a design pattern used to implement inversion of control, meaning the flow of an application is inverted. We create the dependent objects outside of a class and provide those objects to the class in various ways. DI moves the creation and binding of dependent objects outside of the class that depends on them.

kotlin_image1

There are four main roles in DI.
If you want to use this technique, you need classes that fulfil four basic roles. These are:

  1. The service you want to use.
  2. The client that uses the service.
  3. An interface that’s used by the client and implemented by the service.
  4. The injector, which creates a service instance and injects it into the client.
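The four roles can be sketched in a few lines of Kotlin (class names here are hypothetical, purely for illustration):

```kotlin
// Roles 1 and 3: the service, behind an interface used by the client.
interface GreetingService { fun greet(name: String): String }
class EnglishGreetingService : GreetingService {
    override fun greet(name: String) = "Hello, $name!"
}

// Role 2: the client depends only on the interface; the dependency is
// handed to it from outside (constructor injection).
class Greeter(private val service: GreetingService) {
    fun welcome(name: String) = service.greet(name)
}

// Role 4: the injector creates the service instance and wires it into the client.
object Injector {
    fun greeter(): Greeter = Greeter(EnglishGreetingService())
}
```

A DI framework like Koin essentially plays the injector role for you, so you never write `Injector`-style wiring by hand.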

I hope you have now understood DI and know a little bit about Koin.

How Koin Works

kotlin_image2

Koin works on a simple DSL model. First, we create a module; this module contains all the dependent objects. Then we load one or more modules into Koin, and we are ready to use those objects. Generally we load modules into Koin in the Application class by calling the startKoin method; after that we can inject the objects wherever we want. That’s how Koin works.

Koin Vs Dagger

Koin                                      | Dagger
Easy to learn and maintain                | Hard to understand and maintain
Written purely in Kotlin                  | Written in Java
Works on a DSL module                     | Uses annotation processing
Errors appear at runtime                  | Errors appear at compile time
Developed by French developers            | Supported by Google
Has a library for ViewModel integration   | No special ViewModel support
Logs every object creation                | No logging feature
Generates fewer lines of code             | Generates more lines of code than Koin

Setting up Koin

  • Gradle Setup

repositories {
    jcenter()
}

dependencies {
    implementation 'org.koin:koin-android:2.0.1'
}

  • Setup for MVVM extension

repositories {
    jcenter()
}
dependencies {
    // ViewModel for Android
    implementation 'org.koin:koin-android-viewmodel:2.0.1'
    // or ViewModel for AndroidX
    implementation 'org.koin:koin-androidx-viewmodel:2.0.1'
}

Yay! We have successfully completed the setup.

Why Use Koin?

The very simple answer to this question: other options like Dagger 2 are hard to understand and involve boilerplate code, while options like Toothpick cannot easily integrate with ViewModel, scopes or Ktor, whereas Koin integrates with them easily. Koin also has its own separate testing module that helps with testing. Koin uses its own DSL rather than annotations; the Koin DSL is composed of these five things:

  1. Application context
  2. Bean
  3. Factory
  4. Bind
  5. Get

Example

Simple example

  1. We have to create a module with the dependencies that we need to inject.

val myModule = module {
    single { BusinessService() }
}

// Class whose object is going to be injected
class BusinessService() {
    init {
        Log.d("BusinessService", "Created")
    }
    var data = "hello"
}

  2. Now that we have successfully created a module, we need to pass it to Koin to load it.

class KoinDemo : Application() {
    override fun onCreate() {
        super.onCreate()
        startKoin {
            androidContext(this@KoinDemo)
            modules(myModule)
            // modules(mySecondModule)
        }
    }
}

  3. Now we are ready to inject.

val businessService: BusinessService by inject()

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    businessService.data
}

All done! It’s quite simple, isn’t it?

MVVM example

ViewModel Support

Simple injection is just the tip of the iceberg; Koin provides a lot of features. Most importantly, it supports the Architecture Components’ ViewModel. This feature binds Koin more strongly with the Android community, no?
Let’s dive into an example to see its usefulness.
Let’s dive into an example to see its usefulness.

  1. Creating the ViewModel class

class MainActivityVM(var view: MainActivityView) : ViewModel() {
    var userName = ObservableField<String>()
    var password = ObservableField<String>()
    fun onSubmitClick() {
        if (userName.get() == "yudiz" && password.get() == "yudiz123") {
            view.showToast("welcome")
        } else {
            view.showToast("incorrect data")
        }
    }
}

  2. Creating the Koin module

val mvvmModuleDI = module {
    viewModel { (view: MainActivityView) ->
        MainActivityVM(view)
    }
}

  3. Adding the module to Koin

startKoin {
    // Android context
    androidContext(this@KoinDemo)
    modules(mvvmModuleDI)
}

  4. Injecting the ViewModel into the Activity

class MvvmWithKoin : AppCompatActivity(), MainActivityView {
    override fun showToast(msg: String) {
        Toast.makeText(this, msg, Toast.LENGTH_LONG).show()
    }

    // Injecting the ViewModel
    val viewModel: MainActivityVM by viewModel { parametersOf(this) }
    lateinit var binding: ActivityMvvmWithKoinBinding

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // DataBindingUtil.setContentView already sets the content view
        binding = DataBindingUtil.setContentView(this, R.layout.activity_mvvm_with_koin)
        binding.viewModelKoin = viewModel
    }
}

Conclusion

Koin is as fast as Dagger yet easy to learn, it does not require boilerplate code, and it is a very powerful dependency-injection framework for Kotlin developers. So what are you waiting for? Just implement it and enjoy 🙂

MICROSERVICE WITH .NET


Overview

Hello devs! Today we will understand the concept of microservices with .NET Core.

Sounds surprising, right? In this blog we will understand what microservices are and what their features look like. But before understanding microservices, you first have to understand: what is a monolith?

When you develop an application, you can build it and publish it on a server, but when you want to scale the application up you have to move to a cluster environment.

And when an error occurs in the code, you have to redeploy the whole application. This type of application is called a monolithic application.

Microservices were introduced to solve this problem.

net_image1

What are Microservices?

Microservices, also called the microservice architecture, is a collection of small modules and services. Each service or module is deployed individually, and these modules communicate with each other.

Features of Microservices

  • Decoupling
  • Componentization
  • Business Capabilities
  • Autonomy
  • Continuous Delivery
  • Responsibility
  • Decentralized Governance
  • Agility
  • Independent Development
  • Independent Deployment
  • Fault Isolation
  • Mixed Technology Stack
  • Granular Scaling

Microservices with .NET Core

Why Microservices with .NET Core?

  • Cross-platform
  • Consistent across architectures
  • Command-line tools
  • Flexible deployment
  • Compatible
  • Open source

net_image2

Prerequisites for building Microservices with .NET Core

  • Visual Studio 2017
  • .NET Core SDK
  • Windows 10 Pro (64-bit) for Docker installation
  • Docker for Windows
  • Docker Tools

Creating a Microservices Application with .NET Core

net_image3n

Check the “Enable Docker Support” checkbox in the right-hand panel, select Linux from the dropdown, and then click Create.

Now right-click on the project name and select the “Container Orchestrator Support” option.

net_image4n

It will create the docker-compose files. Docker Compose is used to run your application.

net_image5

Be careful: before running your application, you must start Docker.
In this application you can create small modules or services and run each service individually using Docker. Before starting your application, check that the ports match in the “Dockerfile” and “docker-compose.yml”.

net_image6

net_image7
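As an illustration, the container port exposed in the Dockerfile and the mapping in docker-compose.yml must agree. The service and image names below are hypothetical; the files Visual Studio generates for your project will differ:

```yaml
# docker-compose.yml (illustrative sketch; names are hypothetical)
version: '3.4'

services:
  demoservice:
    image: demoservice
    build:
      context: .
      dockerfile: DemoService/Dockerfile
    ports:
      - "8080:80"   # host port 8080 maps to container port 80 (EXPOSE 80 in the Dockerfile)
```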

After that, you can run your application and you will get the following output:

net_image8

Conclusion

So friends, today we learnt about microservices with .NET Core, how microservices work and why they are useful. We also learnt why we use .NET Core for microservices.

Yudiz Solutions Continues to Deliver as a Top App Development Company in Ahmedabad


Nine years of experience, 2,200 successful projects, 467 clients, 80% client retention, and 200 employees. These are the numbers that make up Yudiz, an application development company based out of Gujarat, India. Our team is committed to delivering the highest quality service for our clients in order to curate life-long partnerships. Taking businesses to the next level is what we specialize in. With the expertise of our developers, we are able to turn your dreams into a reality.

We are well-versed in a wide range of mobile app, game, web, blockchain and chatbot development, as well as UX/UI design techniques. In the past, we have helped our clients redesign websites, develop video games, and create both iOS and Android applications.

Clutch, a B2B ratings and reviews platform, has selected us as one of the top app development companies in Ahmedabad. By analyzing industry data and conducting client interviews, Clutch gave us a 4.8-star rating!

Recently, we had the opportunity to aid JC Web Design with the front and back-end development of a ticket sales platform. We were praised for our effective project management, flexibility, and communication.

yudiz_image1

This website is no doubt one of my best portfolio pieces and has already earned me more business from other customers who have been impressed by it.” – Managing Director of JC Web Design.

Another project we had a lot of fun working on was creating a video game for PX Kids Interactive. We used our expertise in mobile game development to write the entire codebase. Additionally, we provided artwork and design elements.

The game the team created fit my vision perfectly, and they stayed under budget. I was blown away by the talent of their developers.” – CEO of PX Kids Interactive, Inc.

We received a perfect 5.0-star rating for our participation in this project!

yudiz_image2

We are very grateful for our client feedback. These reviews help us enhance our Clutch profile in order to help us attract new clients.

“We appreciate all our clients who have taken time from their busy schedule to review us on Clutch. These reviews mean a lot to us because it strengthens our online presence and also makes us feel proud that our clients are happy with our services and we have successfully delivered them solutions that surpassed their expectations. These reviews act as an online word-of-mouth and makes our prospects feel comfortable in starting a new business relationship with us.” – Kalyan Acharya, Business Development Head.

Not only have we been featured on Clutch, we have earned a listing from The Manifest and Visual Objects, sister sites of Clutch. The Manifest has named us among the top 10 AR companies in India. Visual Objects put us on its list of top web developers in India.

If you need a team to bring your innovative ideas to life, let us help. Contact us today, and let’s see if we’d be a good fit.

NopCommerce with .NET


Overview

Hello friends! Today we will see something interesting. In today’s world everything happens online: if you want to buy something, you just go online and buy it. If you work with WordPress, it is easy to build an online store website using the WooCommerce plugin, but in .NET it is tough to develop an online store website from scratch.
So what do we do in .NET to develop an online store website?
Do we build everything ourselves, like the database, and write huge amounts of code?

nop_image1

The answer is NO.
We have NopCommerce. NopCommerce works like WooCommerce and provides a lot of functionality out of the box, so we don’t have to write code for managing users, orders, reports, vendors, etc.

nop_image2

Without NopCommerce we would follow the usual approach: create a database, create tables, connect the database to the project, write stored procedures and build APIs. NopCommerce does all of this for us. NopCommerce is open source, so you can download it and use it in your project. You can also customize NopCommerce’s code and design.

Why NopCommerce?

  • Open source
  • Secure
  • Cost Effective
  • Easy to develop and design

Features of NopCommerce

  • Mobile Commerce (Responsive)
  • Multi Store
  • Multi vendor
  • Product features
    • Product attribute
    • Product comparison
    • Stock management
    • Price management
    • Downloadable product
    • Return management
    • Multiple images per product
  • SEO
    • Search engine friendly
    • Sitemap
    • Microdata
    • Localizable URLs
    • Breadcrumbs
    • Google Analytics integration
  • Checkout
  • Marketing
    • Reward Point Features
    • Related Product
    • Discount
    • Coupons
    • Product review and rating
  • Payment method
  • Shipping features
  • Tax features

Can we integrate APIs with NopCommerce?

Yes, we can integrate APIs with NopCommerce. We can also change the theme of the frontend and modify NopCommerce’s code.

nop_image3

NopCommerce also provides an admin panel and a frontend. In the admin panel, NopCommerce provides a lot of menus.

nop_image4

Conclusion

In this blog we learnt what NopCommerce is, the features of NopCommerce, and why NopCommerce is useful.

Unity Cinemachine: A Complete Self Guide


Overview

If you are someone who loves cameras and has shot a lot of projects, both in games and in real life, Cinemachine is designed specifically for you.

Adam Myhill, Head of Cinematics at Unity, came up with the idea to make storytelling fun, fast and iterative, so that users can create shot sequences with the power of procedural cameras. He wanted to establish a relationship between the cameras and their subjects so they would act like an army of little robots, ready to follow their instructions, and that’s how Cinemachine was born.

That being said, today I hope to share my learnings on how one can make their own Unity Cinemachine Video.


Introduction To Unity Cinemachine:

Cinemachine is a complete Unity package for making cutscenes, filming video, virtual cinematography and more. Cinemachine can also be used for in-game camera handling without scripting camera behaviours. In total, there are two different versions available.

1. Cinemachine for Games:

Using Cinemachine speeds up game development. It is easy to use for games: for example, an FPS (first-person shooter) follow camera, or a story game where you set up a scene and play a video. It supports both 2D and 3D platforms.

2. Cinemachine for Film And Video:

It’s also easy to use Cinemachine to make a film or video. It supports many camera-motion features, for example tracking, dollies, shake, etc. You can change your animation after setting up the layout in Unity, and your shots adjust dynamically.

How to get started?

  1. Download and install Unity 2017.1 or a higher version (recommended).
  2. Open or create a new 3D project.

guide_image1

  3. Go to Window > Package Manager.
  4. Search for “Cinemachine” and install the package.

guide_image2

  5. Create an empty GameObject and name it Timeline.

guide_image3

  6. Open the Timeline window.

guide_image4

  7. Select the Timeline object, click Create in the Timeline window and save.

guide_image5

  8. Delete the Timeline’s Animator component, because we do not need it.

guide_image6

  9. Drag and drop an object or character into the Timeline for animation.

guide_image7

  10. Right-click and select an animation for your object.

guide_image8

  11. Record an animation if you want.

guide_image9

  12. Now add a Cinemachine Brain component to your Main Camera.

guide_image10

  13. Create a Virtual Camera from the Cinemachine window.

guide_image11

  14. Select the virtual camera, take any shot in the Scene window, and click “Move to view” to align the virtual camera with your view.

guide_image12

  15. Drag and drop a focus point into the Look At and Follow fields of the virtual camera.

guide_image13

  16. Create a second virtual camera from the Cinemachine window.
  17. Create another shot, and drag and drop a focus point into the Look At field of the second virtual camera.

guide_image14

  18. Drag and drop the Main Camera into the Timeline window to create a Cinemachine track.

guide_image15

  19. Drag all the virtual cameras onto the Cinemachine track and play.

guide_image16

guide_image16

Made with Unity Cinemachine Video:

Do check out the video from the URL below. I have used many features of Unity Cinemachine, such as virtual cameras and dolly tracks. I have also used models and animations from Mixamo, and environment assets from the Unity Asset Store.

Demo Video:

Conclusion

I hope this blog proved useful, at least to the extent that you now understand Cinemachine’s functioning and purpose.

Try it for yourself; you can show your own creativity on any medium. Lastly, if you liked the blog and its content, do share it with your network.


Understanding Isometric Designs With Adobe Illustrator CC


Overview

Today, we are going to create a 3D-looking 2D design. From infographics to modelling to magazine illustrations, the 3D style seems to be everywhere lately, and it’s safe to assume the trend will keep growing. Isometric illustration is a modern, polished way of creating 3D-looking objects that still feel like 2D vector art. Let’s draw an isometric design in Adobe Illustrator CC to understand it in detail.

ISO_IMAGE1


Why Isometric designs have been trending

The reason they are trending nowadays is that they are easy to understand, less cluttered and more detailed. In an isometric design, we can see three different sides at a time, which produces a fake 3D effect.

Let me know what you think about my following work.

ISO_IMAGE2

Here you can find the Reference Link.

Let’s have a look at the steps now.

Here are the steps through which you can create an isometric design on your own in Adobe Illustrator CC 2019. We will see how to draw the isometric design from a front view of the design.

How to design an isometric grid in Illustrator CC

(1) Create Artboard & Open Rulers

ISO_IMAGE3

First, open Adobe Illustrator CC and create an artboard of 1080 x 1920 px. To show the rulers, click View > Rulers > Show Rulers.

(2) Adding Guides to the Rulers

ISO_IMAGE4

You will see the rulers at the top and on your left side. Just double-click on the top ruler and you will see the guide as above.

(3) Setting Keyboard Increment

ISO_IMAGE5

Now press Cmd+K (for Mac) or Ctrl+K (for Windows). You will see a popup as above. Set the keyboard increment to 30px.

(4) Verifying the Incremented Guide

ISO_IMAGE6

Select the guide and press Option + right-arrow (for Mac) or Alt + right-arrow (for Windows). The guide will be copied and pasted at the next 30px increment. Repeat until you see guides like this.

(5) Rotate the Guides

ISO_IMAGE7

In this step, select all the guides and press Cmd+G (for Mac) or Ctrl+G (for Windows); you can also right-click to group them. Now press “R” and then “Enter”. You will see the Rotate popup. Set the angle to 120° and click OK.

(6) Copy and Reflect the Rotated Guide

ISO_IMAGE8

Select the guide, press “O” and then “Enter”. It will show you the popup above. Select Vertical and press Copy.

(7) Base of Isometric Design

ISO_IMAGE9

You will now see the screen above. This is the base of an isometric design. Whatever you draw in your isometric design should be aligned with these guides.

How to design a building in isometric view using the grid

(8) First Stage of Design – Rooftop

ISO_IMAGE10

It all starts with aligning shapes to the guides. I started my design with the roof of the building. You can do the same, or try your own!

(9) Progressing ahead on the Isometric Design

ISO_IMAGE11

Just look at your reference image and try to understand its depth and sides. My reference image is not in isometric view, so all I have to do is visualize how it would look and then draw it aligned with the guides.

(10) Creating Base Design for Doors & Windows

ISO_IMAGE12

I have prepared a base design of the building on which I can put windows, doors and other details. I designed only half of it, because both sides look similar; I will reflect and copy it, and I will show you how.

(11) Making the Base More Live

ISO_IMAGE13

Now you can add windows, doors, etc. to make the building look more alive.

ISO_IMAGE14

(12) Performing the Copy Reflect again

ISO_IMAGE15

Once I am done with my desired design, I select the portion I want to reflect, press “O” and then “Enter”. It will show you this popup; select Vertical and press Copy, as we already did for the guides.

ISO_IMAGE16

It will look like this. Now you can add detailing by yourself.

Always start your design flat and simple; once you are clear about your design, you can go for detailed work.

(13) Finalizing the Isometric Design & Adding More Depths

ISO_IMAGE17

This is the final result of my design. I added some depth effect to the windows and doors, set the colors, and added some light and shadow.

Hope you found my blog educational and inspiring. Stay tuned for more upcoming…

Understanding Chroma Key effect & Learning its use In VFX


Overview

It has been a few decades since green screens came into widespread use. Initially, this technology was said to be reserved primarily for Hollywood blockbusters and local news stations, but it is now utilized by almost all major YouTubers.

Back in the day, the process used to be quite complex: things like optical printers and film strips were required for adding a green screen effect. But thanks to modern-day inventions, all it takes now is roughly $30, a smartphone and a capable video editor.

Wondering why I have only been talking about the green screens? What’s so special about it?

It is actually a visual effect used prominently by the industry these days, known as the “Chroma key effect”. If you haven’t encountered the Chroma key effect before, go through my blog to get more insights.

What is the Chroma Key effect?

A chroma key is an effect that allows you to remove everything of one selected colour from a shot, making it transparent so that another image can show through.

The Chroma key effect goes by many names: colour keying, colour separation overlay, green screen or blue screen. It allows the user to easily remove the background. It is widely used in the film industry, news channels, television programs, studios and advertisements. Mostly green or blue is used, chosen according to the content of the film or video. Keying can be done either in post-production or during real-time recording.

VFX_image1
(Green Footage)

VFX_image2
(Output)

VFX_image3

Why only Green/Blue Colour?

The principle behind chroma keying is that green and blue are the colours most distinct from human skin tones. Black and white are also used, but only in the rarest of cases. If we used colours like red or orange, the key would also remove parts of human skin, clothes or other objects that are important in the video. That’s why other colours are not used in the chroma effect.

VFX_image4

VFX_image5

List of Software which allows this effect

Almost all desktop video editors provide this effect, and some mobile applications do too. Just to give you a list, here is some famously used software supporting the Chroma key effect.

  • Adobe After Effect
  • Adobe Premiere
  • Final Cut Pro X
  • Apple iMovie
  • Filmora
  • Camtasia Video Editor
  • Corel VideoStudio
  • Sony Vegas & more

Surely by now you are wondering how exactly to use the Chroma key effect?

How to Use Chroma effect?

It is very simple to use; you just need to follow these steps. Here I have used Adobe Premiere Pro for the chroma effect.

Step 1:

First, you need a solid green or blue coloured background. Open Adobe Premiere Pro, import your video footage and background according to your project, then drag and drop the footage into the timeline.

Note: Here you need to put your green/blue screen footage above your background footage.

VFX_image6

Step 2:

Then search for “Ultra Key” in the Effects panel. You can also find it under Effects > Video Effects > Keying > Ultra Key. Apply this effect to your green-screen footage.

VFX_image7

Step 3:

You will now find the Ultra Key effect in the ‘Effect Controls’ panel. Here you just need to pick the background colour of your green or blue footage with the eyedropper. After picking the colour you can see your output.

That’s it 🙂

VFX_image8

Step 4:

Here you can see your output without green colour background.

VFX_image9

If you are a non-technical person and weren’t able to visualize exactly how it works, refer to the video here.

Hope this blog has helped you to understand the concept of Chroma key effect.

GIF was taken from: https://giphy.com/

UX Research – The Most Vital Tool for a Brand’s Success


“To be a great designer, you need to look a little deeper into how people think and act.”
– Paul Boag, Co-founder of Headscape Limited

But the question that arises in our mind is, “Why is User Research so important in Design?”

Because it is the most vital component of designing any user experience. Typically carried out at the start of a project, it covers different types of research methodology for gathering data and feedback.

In a nutshell, UX Research means the difference between designing based on guesswork & assumptions & actually creating something that helps to solve the user’s problem. It also helps us to align our product & business strategy with the user’s core needs & goals.

Now, when you look at IT companies today focusing on the service industry, the major challenge they face is completing and delivering projects on or before time. In doing so, they are not able to devote their key focus to the vital part of the project: the UX design.

The only two aspects a client is most interested in are a) timeline and b) budget. The ultimate solution is UX research. With UX research, the company gets a crystal clear idea of the problems that are going to arise in future, ultimately saving time that can instead be invested in the development process.

For UX design, the first thing needed is the right research into users and their needs.

So whenever any project is initialized, what we basically need is the right sitemap and a few techniques that can prove helpful.

The next step is to pick a UX research method that fits the product development.

The renowned methods are:

  1. Card sorting
  2. Desk Research (Secondary Research) &
  3. Testing (Guerilla Testing)

For now, let’s just focus on and learn about Card Sorting. We will discuss the remaining two in upcoming blogs.

This is the most cost-effective (cheapest) method, useful for small projects or module development. Sure, it needs the entire team’s participation and also takes some time 🙂 But very useful!!

Let’s get a more clear idea with an example.

Let’s say the Team is working on an e-commerce website design and needs to fix the categories of products.

The Process needs participants who can be clients, Users or team members.

The products are: Jeans, Jacket, Sofa, Bedsheet, Curtains, Camera, Mobile, Headphone, each written on a card.

UX_image1
(1_card_sorting)

There are basically two types of card sorting methods.
1.1 Open card sorting
1.2 Closed card sorting

1.1 Open Card Sorting

– In this method, participants have to group similar kinds of products based on their properties and give each category a specific name.

– It may also be possible in this method for participants to write the items first and then categorize them.

UX_image2
(2_card_sorting)

UX_image3
(3_card_sorting)

1.2 Closed Card Sorting

– In this method the categories are already specified, but participants have to put each card into the appropriate category.

UX_image4
(4_card_sorting)

UX_image5
(5_card_sorting)

These methods can be used as needed. When the participants arrange the cards and categories, you can analyze the users’ behavior and interests.
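That analysis step boils down to counting how often each card lands in each category across participants. A minimal sketch of the tallying (hypothetical data and function name, shown in Python purely for brevity):

```python
from collections import Counter

def analyze_card_sort(placements):
    """placements: one {card: category} dict per participant.
    Returns the most-agreed category for each card."""
    votes = {}
    for participant in placements:
        for card, category in participant.items():
            votes.setdefault(card, Counter())[category] += 1
    # pick the category with the most votes per card
    return {card: counts.most_common(1)[0][0] for card, counts in votes.items()}

participants = [
    {"Jeans": "Clothing", "Sofa": "Furniture", "Camera": "Electronics"},
    {"Jeans": "Clothing", "Sofa": "Home", "Camera": "Electronics"},
    {"Jeans": "Clothing", "Sofa": "Furniture", "Camera": "Gadgets"},
]
print(analyze_card_sort(participants))
# {'Jeans': 'Clothing', 'Sofa': 'Furniture', 'Camera': 'Electronics'}
```

Cards with split votes (like “Sofa” above) are exactly the ones worth discussing with the team before fixing the navigation.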

It primarily helps to:

  1. Build the structure for your website &
  2. Decide what to put on the homepage, Labels, categories, and navigation.

This technique is majorly used across the IT sectors but can be used in any situation.

Hope you got a clear understanding of the card sorting method. That’s it for now.

In the next blog, we will discuss the widely used method called Desk Research.
So stay tuned 🙂

Using the Built-In System Music Player


Overview

Let’s say you have created an app and you want users to be able to access Apple Music from it. Yes, that is possible!!

Today, we will learn how to handle Apple Music actions (Play, Pause, Next, Previous) from our own application, along with remaining time, total time and elapsed time, complete with album art. In short, we will control all Apple Music actions from our own app.

What is System Music Player?

The system music player plays music using the built-in Music app on your behalf. On instantiation, it takes on the current Music app state, such as the identification of the now-playing item. If a user switches away from your app while music is playing, that music continues playing. The Music app then shares almost every detail of your music player’s state, such as the most recently set repeat mode, shuffle mode, playback state and now-playing item.

Design

MP_image1

Code explained

Code Topics
  • Creating Outlets
  • Import Framework
  • Permission added in infoPlist file
  • Actions for Play, Pause, Next, Previous button

Creating Outlets

@IBOutlet var imageAlbum: UIImageView! // display song artwork image
@IBOutlet var lblTitle: UILabel! // display artist name and song name
@IBOutlet var lblDuration: UILabel! // display total song duration
@IBOutlet var lblElapsed: UILabel! // display elapsed time
@IBOutlet var lblRemaining: UILabel! // display remaining time

Import Framework

First, you’ll need to import media framework

import MediaPlayer

Create a variable for the media player and timer

let audioPlayer = MPMusicPlayerController.systemMusicPlayer
var timer = Timer()

Permission added in infoPlist file

Now set the permission to access Apple Music in info.plist:

<key>NSAppleMusicUsageDescription</key>
<string>This app needs access to Apple Music to control playback.</string>

Next we’ll need to add some code to the viewDidLoad function to get things going at startup, starting with getting the media player ready to go:

audioPlayer.prepareToPlay()

Next, you’ll need to set up a timer; this will come in handy for updating the UI while a song is playing.

self.timer = Timer.scheduledTimer(timeInterval: 1, target: self, selector: #selector(self.timerFired(_:)), userInfo: nil, repeats: true)

Actions for Play, Pause, Next, Previous button

We create some actions for play, pause, next and previous music.

@IBAction func btnNextTapped(_ sender: UIButton) {
     audioPlayer.skipToNextItem()
 }

@IBAction func btnPreviousTapped(_ sender: UIButton) {
     audioPlayer.skipToPreviousItem()
 }

@IBAction func btnPlayPauseTapped(_ sender:  UIButton) {
        btnPlayPauseUpdate()
 }

func btnPlayPauseUpdate() {
        btnPlayPause.isSelected = !btnPlayPause.isSelected
        btnPlayPauseType = btnPlayPause.isSelected ? .play : .pause
        btnPlayPauseType == .play ? startTimer() : stopTimer()
        btnPlayPauseType == .play ? audioPlayer.play() : audioPlayer.pause()
 }

Inside that function is where we are going to update all of the labels as well as get the slider to progress along with the song. Enter the following in the function:

if let currentTrack = MPMusicPlayerController.systemMusicPlayer.nowPlayingItem { 
}

This creates a constant for

MPMusicPlayerController.systemMusicPlayer.nowPlayingItem

so it’s easier to call while also ensuring that it exists before trying to pull the information. We’re going to need to pull a few key things before we can do anything, and the code should be pretty self-explanatory.

let trackArtist = currentTrack.artist ?? ""
let trackName = currentTrack.title ?? ""

let albumImage = currentTrack.artwork?.image(at: imageAlbum.bounds.size)

let trackDuration = currentTrack.playbackDuration

let trackElapsed = audioPlayer.currentPlaybackTime

Since we set the timer’s interval to 1, the information will be updated every second, so currentPlaybackTime will constantly update as the song plays.

So what do we do with that information? First, let’s go ahead and set the image to the current track’s album artwork, which we’ve already gotten above.

imageAlbum.image = albumImage

We can also set the label above the slider to display the song’s artist and title.

lblTitle.text = "\(trackArtist) - \(trackName)"

Now calculate the total duration of song.

let trackDuration = currentTrack.playbackDuration
//Convert total seconds into minutes
let trackDurationMinutes = Int(trackDuration / 60)
//get last remaining seconds
let trackDurationSeconds = trackDuration.truncatingRemainder(dividingBy: 60)
//convert second into integer
let trackDurationInt = Int(trackDurationSeconds)
//display total duration of song (seconds zero-padded, e.g. 3:05)
lblDuration.text = String(format: "Length: %d:%02d", trackDurationMinutes, trackDurationInt)

In Next step calculate the remaining and elapsed time.

// get current playback time
let trackElapsed = audioPlayer.currentPlaybackTime

// convert into minutes and get remaining seconds
let trackElapsedMinutes = Int(trackElapsed / 60)
//get last remaining seconds
let trackElapsedSeconds = trackElapsed.truncatingRemainder(dividingBy: 60)
// Convert into integer track elapsed time
let trackElapsedInt = Int(trackElapsedSeconds)
// Display elapsed time (seconds zero-padded)
lblElapsed.text = String(format: "Elapsed: %d:%02d", trackElapsedMinutes, trackElapsedInt)

// calculate last remaining minutes and seconds
let trackRemaining = Int(trackDuration) - Int(trackElapsed)
let trackRemainingMinutes = trackRemaining / 60
let trackRemainingSeconds = trackRemaining % 60
// Display remaining time (seconds zero-padded)
lblRemaining.text = String(format: "Remaining: %d:%02d", trackRemainingMinutes, trackRemainingSeconds)

Now run the application and test how it works.

MP_image2

MP_image3

Conclusion

And that’s how easy it is to have all the Apple Music controls in your own app. Write back with any queries. Adios!

A User-Interactive way to read OTP in Android


Overview

Welcome visitors, greetings for the day. Hope you are doing well. As we all know, Google is making Android app publishing stricter day by day: in January 2019, permission to access SMS and call logs was removed for non-default apps. However, Google has introduced a few APIs to get the job done. Let’s have a look at one such API…

Introduction

The SMS User Consent API allows your app to show a consent dialog asking the user to allow or deny permission to read a specific incoming SMS. After the permission is granted, the app can access the whole message body to perform the further SMS verification process. The SMS User Consent API is more user-interactive than the SMS Retriever API.

When To Use?

The SMS User Consent API does not require any custom format or hash code in the message body. So if you don’t have control over the SMS body, you can use the SMS User Consent API.

Google recommends using the SMS Retriever API for SMS verification where possible, because it gives a fully automated, more secure and better user experience.

How Does It work?

OTP_image1
credits: Google

To implement SMS verification we need to work on both the app side and the server side.
The SMS User Consent API listener will listen for new messages for five minutes after it is started.

OTP_image2

Let’s Buckle Up For A Demo

OTP_image3

1. Creating a new project:

Create a new project in your Android Studio from File ⇒ New Project and select Empty Activity from templates.

2. Adding Dependencies:

Open application level build.gradle file and add required dependencies.

dependencies {
   implementation fileTree(dir: 'libs', include: ['*.jar'])
   implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
   implementation 'androidx.appcompat:appcompat:1.0.2'
   implementation 'androidx.constraintlayout:constraintlayout:1.1.3'  
   // Google Consent API dependency
   implementation 'com.google.android.gms:play-services-auth-api-phone:17.1.0'
   // optional for Phone Selector to get the number
   implementation 'com.google.android.gms:play-services-auth:17.0.0'
}

There is an optional API, Google’s phone number selector, to get the number from the device. We are not going to implement it in the current example, but if you wish to, this guide will help you here.

3. Initializing Consent API:

We have to initialize the SMS User Consent API. We can specify the sender number so that Play services will only look for new messages from that number, or pass null so that any number is allowed.

// starting the SMS User Consent API
// pass a specific sender number, or null to match any sender
val task = SmsRetriever.getClient(context).startSmsUserConsent(null)

4. Starting to Listen For Incoming Message:

For the next five minutes, when the device receives a new message, the play service will trigger the broadcast with the intent of the prompt which asks permission to read that message.

SmsRetriever.SMS_RETRIEVED_ACTION will be used to handle and respond to the received broadcast intent.

The broadcast will be triggered only if the below criteria are fulfilled for the message:

  1. The message body should contain a 4-10 character alphanumeric string with at least one number.
  2. The message should be sent from a phone number that is not in the user’s contacts.
  3. If you have specified a sender number in code, then the service will only listen for messages from that number.
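Criterion 1, the shape of the code inside the message body, can be expressed in a couple of lines. Here is an illustrative sketch (Python, hypothetical function name; the real matching happens inside Play services, not in your app):

```python
import re

def looks_like_otp_code(code: str) -> bool:
    """True if `code` is a 4-10 character alphanumeric string
    containing at least one digit, which is the shape of
    verification codes the SMS User Consent API looks for."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{4,10}", code)) and any(c.isdigit() for c in code)

print(looks_like_otp_code("483902"))  # True
print(looks_like_otp_code("ABCD"))    # False: no digit
print(looks_like_otp_code("123"))     # False: too short
```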

private val SMS_CONSENT_REQUEST = 2  // Set to an unused request code
private var smsVerificationReceiver = object : BroadcastReceiver() {
   override fun onReceive(context: Context, intent: Intent) {
       if (SmsRetriever.SMS_RETRIEVED_ACTION == intent.action) {
           val extras = intent.extras
           val smsRetrieverStatus = extras?.get(SmsRetriever.EXTRA_STATUS) as Status
           when (smsRetrieverStatus.statusCode) {
               CommonStatusCodes.SUCCESS -> {
                   // retrieving the consent intent
                   val consentIntent = extras.getParcelable<Intent>(SmsRetriever.EXTRA_CONSENT_INTENT)
                   try {
                       //activity must be started to show consent dialog within 5 minutes               
                       // otherwise new timeout intent will be received. 
                       startActivityForResult(consentIntent, SMS_CONSENT_REQUEST)
                   } catch (e: ActivityNotFoundException) {
                       Toast.makeText(applicationContext, e.message, Toast.LENGTH_LONG).show()
                   }
               }
               CommonStatusCodes.TIMEOUT -> {
                   // Time out
                   Toast.makeText(applicationContext, "Timeout", Toast.LENGTH_LONG).show()
               }
           }
       }
   }
}

We need to register the broadcast receiver in our class to get the broadcast from Play services. Here SmsRetriever.SMS_RETRIEVED_ACTION works as the intent filter.
We have to register and unregister it in the activity according to its lifecycle.

override fun onResume() {
   super.onResume()
   //Registering broadcast receiver to receive broadcast.
   val intentFilter = IntentFilter(SmsRetriever.SMS_RETRIEVED_ACTION)
   registerReceiver(smsVerificationReceiver, intentFilter)
}

override fun onPause() {
   super.onPause()
   unregisterReceiver(smsVerificationReceiver)
}

5. Getting Message:

After the prompt intent asks permission to read that specific message, we need to handle its result, which is done in the onActivityResult method.

public override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
   super.onActivityResult(requestCode, resultCode, data)
   when (requestCode) {
       SMS_CONSENT_REQUEST ->
           if (resultCode == Activity.RESULT_OK && data != null) {
// getting message
               val message = data.getStringExtra(SmsRetriever.EXTRA_SMS_MESSAGE)
               Toast.makeText(applicationContext, message, Toast.LENGTH_LONG).show()
               edt_otp.setText(message)
//  here, message variable will contain a full message.
// you to write your logic to fetch the verification code from the message body.
// after getting verification code you can send it to the server or do your further process.
           } else {
               Toast.makeText(applicationContext, "Consent denied, please type manually", Toast.LENGTH_LONG).show()
               //permission denied. User has to type code manually.
           }
   }
}

There will be two possibilities in onActivityResult:

  1. If the user has granted the permission, we can get the message body from the intent. We need to add our own logic to extract the verification code from the message body, which depends entirely on the message format.
  2. If the permission is denied, then we need to give the user the option to type the verification code manually.
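For possibility 1, the extraction logic depends on your message format. If, for example, the SMS reads “Your verification code is 123456”, a simple regex does the job. The sketch below is illustrative Python (hypothetical helper; in the app you would write the equivalent Kotlin):

```python
import re

def extract_verification_code(message, digits=6):
    """Pull the first run of exactly `digits` digits out of the
    SMS body, or None if no code is present."""
    match = re.search(rf"\b\d{{{digits}}}\b", message)
    return match.group() if match else None

print(extract_verification_code("Your verification code is 123456"))  # 123456
print(extract_verification_code("Hello, no code here"))               # None
```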

Aaanndd it’s done! Wasn’t it as easy as pie?

OTP_image4

TL;DR

The SMS User Consent API is a new API to read SMS without asking for the READ_SMS permission. Unlike the SMS Retriever API, we don’t need to customize the message body with a hash code to read it. It is very simple and easy to implement in our projects.

To Know More About SMS Verification API:
https://developers.google.com/identity/sms-retriever/choose-an-api

Progressive Web Apps With Angular: Part 2


Overview

Here we are going to add PWA features to our Contact Book application, which we created in Part 1 with Angular 7. We will add native-like features such as a splash screen, offline support and sharing data on social media, plus a popup to add the app to the home screen when a user opens the application on a mobile device.

prog_image1

Install & add angular/pwa

In Angular 7, we can generate the PWA files and all their dependencies automatically.
The @angular/pwa version used here is 0.7.4.

ng add @angular/pwa

Installed packages for tooling via npm.

CREATE ngsw-config.json (392 bytes)
CREATE src/assets/icons/icon-128x128.png (1253 bytes)
CREATE src/assets/icons/icon-144x144.png (1394 bytes)
CREATE src/assets/icons/icon-152x152.png (1427 bytes)
CREATE src/assets/icons/icon-192x192.png (1790 bytes)
CREATE src/assets/icons/icon-384x384.png (3557 bytes)
CREATE src/assets/icons/icon-512x512.png (5008 bytes)
CREATE src/assets/icons/icon-72x72.png (792 bytes)
CREATE src/assets/icons/icon-96x96.png (958 bytes)
CREATE src/manifest.json (1083 bytes)
UPDATE angular.json (3563 bytes)
UPDATE package.json (1388 bytes)
UPDATE src/app/app.module.ts (526 bytes)
UPDATE src/index.html (390 bytes)

The command installs all of the dependencies needed for PWA support, creates a default service worker config file, a default manifest.json and even default icons for your splash screen and mobile home screen. You can edit these files as per your need.

Now your app is ready to be built.

ng build --prod

Your app, with the service worker and manifest file, is ready to deploy. The service worker caches CSS/SCSS, JS, assets and the index.html file. It only works in production builds, because caching JavaScript would interfere with development mode, where live debugging might be needed.

iOS Support

Android takes support from the manifest file, but iOS does not support the manifest file, so we have to add the following meta tags to index.html manually.

<meta name="apple-mobile-web-app-capable" content="yes">
<!-- by default it will take the app title -->
<meta name="apple-mobile-web-app-title" content="My Contact App">
<!-- by default it will take a screenshot of the app as the logo -->
<meta name="apple-touch-icon" href="assets/icons/icon-192x192.png" sizes="180x180">
<meta name="apple-touch-icon" href="assets/icons/icon-128x128.png" sizes="120x120">
<!-- by default iOS uses a square logo, so blank space will be filled with black -->
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">

Inviting User To Install App

To invite the user to add our application to the home screen, we need the following code in app.component.ts.

ngOnInit(){
if ((navigator as any).standalone === false) {
     this.snackbar.open("You can add this to the Home Screen", '', {
       duration: 3000
     });
   }
if ((navigator as any).standalone == undefined) {
     if (window.matchMedia("(display-mode: browser)").matches) {
       // in browser
       window.addEventListener('beforeinstallprompt', event => {
         event.preventDefault();
         const snackbar = this.snackbar.open("Do you want to install this App?", "Install ", {
           duration: 5000
         });
         snackbar.onAction().subscribe(() => {
           (event as any).prompt();
           (event as any).userChoice.then(result => {
             if (result.outcome == 'dismissed') {
               console.log("Dismissed");
             } else {
               console.log("Installed");
             }
           })
         })
         return false;
       })
     }
   }
}

Service Worker

According to Google,
“A service worker is a script that your browser runs in the background, separate from a web page, opening the door to features that don’t need a web page or user interaction.”

For earlier versions of the PWA tooling, we needed to add the service worker via an npm module. For newer versions, it is installed automatically. For configuration of the service worker, we have the ngsw-config.json file.

npm install --save @angular/service-worker

Offline Support

We need to give our application offline support, so the user can use it even when there is no internet connection. For that, we add a new file in the root folder, ngsw-manifest.json. In this file, we add the routing for our application in JSON format for offline support.

{
   "routing": {
       "index": "/index.html",
       "routes": {
           "/": {
               "match": "exact"
           },
           "/contact": {
               "match": "prefix"
           }
       }
   },
   "static.ignore": [
       "6\/icones\/.*$"
   ],

Here, we add the external links which are used in our application.

"external": {
       "urls": [
           {
               "url": "https://fonts.googleapis.com/icon?family=Material+Icons"
           },
           {
               "url": "https://fonts.gstatic.com/s/materialicons/v38/flUhRq6tzZclQEJ-Vdg-IuiaDsNcIhQ8tQ.woff2"
           }
       ]
   },

Here, we configure how dynamic content is served and cached in our application.

"dynamic": {
       "group": [
           {
               "name": "api",
               "urls": {
                   "http://localhost:3000/contacts": {
                       "match": "prefix"
                   }
               },
               "cache": {
                    "optimizeFor": "freshness",
                   "networkTimeoutMs": 1000,
                   "maxEntries": 30,
                   "strategy": "lru",
                   "maxAgeMs": 360000000
               }
           }
       ]
   }
}

Update UI When Network Status Change

Now we want to change the UI when the user goes online or offline. For that, we need to add the following code in app.component.ts.

updateNetworkStatusUI() {
   if (navigator.onLine) {
     (document.querySelector("body") as any).style = "";
   }
   else {
     (document.querySelector("body") as any).style = "filter:grayscale(1)";
   }
 }

ngOnInit() {
   //check network status
   this.updateNetworkStatusUI();
   window.addEventListener("online", this.updateNetworkStatusUI);
   window.addEventListener("offline", this.updateNetworkStatusUI);
  }

Check Audit

First we need to create a production build with ng build --prod. Then we can check how our application performs. All we need to do is open Chrome DevTools and move to the Audits tab, where you will find a very powerful tool, Lighthouse, the best diagnostics for web pages. Now press Perform an audit. It will take some time, and then we get the results. The result looks good; here we can see the increased PWA score.

prog_image2

Conclusion

Finally, our app is ready to work on mobile devices as a Progressive Web Application (PWA) with Angular 7.

See Live Demo: https://contactbookdemo.herokuapp.com/

CameraX


Overview

At Google I/O 2019, Google introduced a powerful API for camera tasks called CameraX. This API is part of Jetpack. It minimizes camera development time with a consistent and easy-to-use API, simplifying common camera functionality across the majority of Android devices, from Android 5.0 Lollipop (API level 21) to the latest version.

Introduction

Everyone knows that camera development is not an easy task; you have to provide a consistent experience across a range of devices. CameraX provides backward compatibility: it works on Lollipop and higher versions of Android. It takes a use-case-based approach which is app-lifecycle aware, and it comes with three use cases that cover all the functionality required for camera app development.

What’s New In CameraX

Backward compatible:

CameraX is backward compatible with Android 5.0 Lollipop (API 21) and works on 90% of the devices in the market today.

Consistent behaviour:

CameraX provides the same consistency as Camera1 via the Camera2 API legacy layer. Until now, we had to write a lot of manual code to provide consistency across a variety of devices.

Fixed a lot of issues across devices:
Some of the issues fixed in CameraX:

  • Front/Back camera switch crashes
  • Optimised camera closures
  • Orientation incorrect
  • Flash not firing

Here is the typical walkthrough of the camera app architecture. Until now, a camera app communicated with the public Camera2 API, and the Camera2 API handled communication with the device HAL (hardware abstraction layer).

Now we only have to deal with CameraX, as it hides all the lower-level complexity from developers.

Let’s Do Some Practical

Requirements:

  • Android API level 21
  • Android Architecture Components 1.1.1

Add below Dependencies:

def camerax_version = "1.0.0-alpha03"
implementation "androidx.camera:camera-core:${camerax_version}"
implementation "androidx.camera:camera-camera2:${camerax_version}"

Add Camera permission in AndroidManifest:

<uses-permission android:name="android.permission.CAMERA" />

CameraX launch with three use-case.

  1. Preview
  2. Image analysis
  3. Image capture

Here we are briefly talking about use-cases:

1. Preview:

Build a preview config using a builder.

val previewConfig = PreviewConfig.Builder().apply {
    	setTargetAspectRatio(Rational(1, 1))
    	setTargetResolution(Size(640, 640))
	}.build()

Configure the preview.

val preview = Preview(previewConfig)

Set the preview output listener. Now, when the preview becomes active, it is going to emit a preview output.

preview.setOnPreviewOutputUpdateListener {
				previewOutput: Preview.PreviewOutput? ->
// To update the SurfaceTexture	
}

The next step is just to turn it on and off, by binding the preview use case to the activity lifecycle. This means that when the activity starts, the preview starts and the camera begins streaming; and when the activity stops, the preview stops and the camera shuts down.

CameraX.bindToLifecycle(this as LifecycleOwner, preview)

Note: If you face any issue after adding the line of code mentioned above, you’ll need to add the following dependency as well.

implementation 'androidx.appcompat:appcompat:1.1.0-rc01'

Here instead of activity lifecycle we can also use other lifecycle such as the fragment lifecycle.
There would need to be code for setting up permission, attaching to views, and managing the surface texture.

2. Image analysis:

CameraX provide easy access to the buffers from the camera, so you can perform your own analysis.

The steps are the same as for the preview use case: create an image analysis config object.

val imageAnalysisConfig = ImageAnalysisConfig.Builder()
    .setTargetResolution(Size(1280, 720))
    .build()

The above call requests a resolution. If your processing requires some minimum resolution to succeed, this is where you specify it.

CameraX balances your application's request against the device's capabilities. If the target resolution is available, you'll get it; otherwise CameraX tries the next higher resolution, and if it still cannot satisfy the request it falls back to 640 x 480, which is guaranteed across all devices.

val imageAnalysis = ImageAnalysis(imageAnalysisConfig)
imageAnalysis.setAnalyzer { image: ImageProxy, rotationDegrees: Int ->
    // Your analysis code goes here.
}

The analyzer provides all the information required for image processing.
We have to bind the image analysis use case to the lifecycle as well, so add imageAnalysis to the CameraX.bindToLifecycle(…) call.

CameraX.bindToLifecycle(this as LifecycleOwner, preview, imageAnalysis)

3. Image capture:

Image capture allows you to take a high-quality picture with the camera.
Create an Image capture config object.

val imageCaptureConfig = ImageCaptureConfig.Builder()
    .setTargetRotation(windowManager.defaultDisplay.rotation)
    .build()

val imageCapture = ImageCapture(imageCaptureConfig)
CameraX.bindToLifecycle(this as LifecycleOwner, preview, imageAnalysis, imageCapture)

We know getting rotation right on devices can be hard, and getting portrait and landscape mode just right across a variety of devices is harder. CameraX reduces this problem.

Now it is ready to go. The preview is displayed on screen, the analyzer is running, and the application is ready to take a picture.

But what about attaching to the output?
It simply requires calling the takePicture method on the image capture use case on a button click.

fun captureImage(view: View) {
   val file = File(externalMediaDirs.first(), "${System.currentTimeMillis()}.jpg")

   imageCapture.takePicture(file,
       object : ImageCapture.OnImageSavedListener {
           override fun onError(
               error: ImageCapture.UseCaseError,
               message: String,
               exc: Throwable?
           ) {
               val msg = "Photo capture failed: $message"
               Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
               Log.e("CameraXApp", msg)
               exc?.printStackTrace()
           }

           override fun onImageSaved(file: File) {
               val msg = "Photo capture succeeded: ${file.absolutePath}"
               Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
               Log.d("CameraXApp", msg)
           }
       })
}

In this method we pass the destination file where the image will be stored after capture.

You can also attach a listener to be notified of success and failure events.

Benefits Of CameraX

Easy to use:
CameraX has made all three use cases (Preview, Image analysis, and Image capture) lifecycle aware, so you no longer have to start and stop the camera yourself.

  • Reduced device-specific testing
  • 75% reduction in lines of code
  • Smaller app size

It has hidden many of the details of Camera2.

  • Opening the camera
  • Creating the session
  • Preparing the correct surfaces
  • Selecting the resolutions
  • The careful shutdown that you sometimes forget

Conclusion

Having personally experienced old-school camera development on Android, I appreciate the CameraX API. Because of its lifecycle awareness, the developer doesn't need to worry about when to turn the camera on and when to turn it off. Using this API you can build a camera app with a lot less code.


Laravel 6 start-up and new functionalities


Overview

Hello friends, I hope you’re doing well. Here I have brought some initial steps to take you one step ahead in the laravel universe. Yes, I am talking about laravel 6. So without wasting time, let’s get started with the installation process.

Prerequisite

PHP Version 7.2.0 or above

Installation

The installation itself is the same as in other laravel versions, but there is a slight change in how the default CSS and JS files are generated.

To get a quick start with default laravel auth please follow the below steps.

Steps for default laravel auth

Step 1:

To get a fresh copy of laravel version 6, fire the following command in your terminal.

composer create-project --prefer-dist laravel/laravel project_name

> It’ll create a new project in your system.

NOTE: Remember this command will install laravel version 6 only if you have installed php version >= 7.2.0 in your system.

Step 2:

Before you move to the built-in login and registration process, you have to install Laravel UI, as required by the new Laravel version. For that, fire the following command.

composer require laravel/ui

Step 3:

php artisan ui vue --auth

> Laravel 6 has removed the “php artisan make:auth” command; instead, they designed a new command, “php artisan ui vue --auth”, for a quick start on the login and registration setup.

REMEMBER: You won’t find any “css” or “js” folder inside the “public” folder of your project. In previous versions of laravel these were created automatically after firing the “php artisan make:auth” command, for the login, registration, and other related page designs.

Now to get a scaffold of CSS and JS file you have to follow the below steps.

Step 4:

Fire the following command in terminal

npm install

> Which will install all the node dependencies in your project.

Step 5:

After installing npm, fire the next command

npm run dev

> As laravel’s built-in “webpack.mix.js” file compiles the default SASS scaffolding, you have to run this command to generate your own CSS and JS assets.

NOTE: After running “npm run dev” command in terminal you’ll find 2 folders inside your public folder. Which will be “css” and “js” and you’ll find the files inside of it.

Now just refresh your login url and you’ll find the designed layout.

Laravel new error page from flare

In previous versions of laravel, we used to face an error page something like this:

laravel_image1

But now we won’t get that kind of error page. Instead, we get a much more expressive one, with a nice design layout and a very good facility for showing us our exact mistake.

For example,

When we define route in our route file as below,

Route::get('/test', 'TestController@test');

And in ‘TestController.php’ you code the “test()” method so that it returns a view which does not exist, like:

public function test() {
    return view('test');
}

Normally, it would fetch the view file and show you the layout; when it cannot find that particular file, it shows an error page. In laravel 6 the error page looks something like this:

laravel_image2

Wooahh!!!! That is really awesome, don’t you think?!

Anyway, did you notice? On this error page you can also see suggestions, in the green background.

But.. But.. But..

Laravel 6 ships with this feature by default. But if you don’t find the “facade/ignition” package inside your composer.json file, you’ll have to install it in your project manually. For this, follow the steps given below.

Steps to solve error in browser

Step 1:

composer require facade/ignition

Step 2:

> Add the following code in your app/Exceptions/Handler.php file.

protected function whoopsHandler()
{
    try {
        return app(\Whoops\Handler\HandlerInterface::class);
    } catch (\Illuminate\Contracts\Container\BindingResolutionException $e) {
        return parent::whoopsHandler();
    }
}

Step 3:

> To get configuration files for this installed package. You should fire the following command.

php artisan vendor:publish --provider="Facade\Ignition\IgnitionServiceProvider" --tag="config"

> Now see in “config” folder. You’ll find two new files

  1. flare.php
  2. ignition.php

In the “ignition.php” file you can set your editor name and the error page theme (I have set mine to light right now). There are many more settings you can change. By default, it gives you phpstorm as the editor name and light as the theme.

Anyway, Let’s go for the next step.

Step 4:

> In this step, you have to add or update some value of file config/logging.php.

In your ‘stack’ array, add one more value, ‘flare’, to the “channels” key. Your stack array should now look like this:

'stack' => [
      'driver' => 'stack',
      'channels' => ['daily', 'flare'],
      'ignore_exceptions' => false,
 ],

Okay, let’s go ahead and make one more change to the same file. Add a ‘flare’ array to the main ‘channels’ array, after ‘errorlog’:

'flare' => [
     'driver' => 'flare',
],

Okay, done!! No more changes are required. Refresh your error page and see the effect. If you don’t see it, clear the project cache or fire php artisan config:cache in your terminal.

Adding this facade/ignition package is helpful on any laravel version you’re working with.

Okay, so far we have learned how to see the error page in a well-designed, expressive layout. Next, we’ll learn how to edit and resolve an error from the browser’s error page.

Let me show one image so that you can get the basic idea about how we can edit and resolve our error from error page itself.

laravel_image3

As you can see in the screenshot of my error page, I have not imported the App\User model, so it suggests importing the class to get rid of the error. I have also highlighted an edit pencil in my screenshot: an indication that you can edit the code right from the error page, and save it using Command + S on Mac or Ctrl + S on Windows.

But before we resolve an error let me tell you that we have to install a package for it.

composer require facade/ignition-code-editor --dev

After installing the above package, your error page works like an online editor.

laravel_image4

You can edit it as shown in the screenshot above. After making changes, just save via Command + S / Ctrl + S. You’ll see the same changes reflected in your actual source file.

So that was it.

BTW, there is one more new interesting feature and that is about Lazy collections. I will talk about it in my upcoming blog.

Cheers!! 🥂

Apple Event Keynote


Overview

Bringing you all the highlights of Apple’s annual event, held under the tagline “By innovation only”.

Apple Arcade

apple_image1

Apple introduced a new service in the gaming area, a huge step aimed specifically at game lovers. Apple added a new tab in the App Store that lists games under the Arcade menu, and it has partnered with many game vendors to add more and more games in the future.

Apple Arcade will cost $4.99 per month and enables family sharing for up to 6 members. A free trial is available as well. It launches on September 19th.

Apple TV+

apple_image2

Another big service announced is Apple TV+. We already had a little bit of an idea before the event. The loveliest part is that 1 year is included if you buy any apple device like a Mac, Apple Watch, etc. Apple TV+ launches on November 1st for $4.99 per month, with a 1-week trial period. Initially, Apple TV+ launches with 5 original TV shows. Just amazing!!

iPad

As we all know, the new iPad launched at the event with many features. Let’s dive deep to learn more about it.

First, the iPad’s size is 10.2 inches with the usual Retina Display. More exciting is iPadOS 13, which has mind-blowing features and helps people in all areas like gaming, business, and entertainment. The new iPad has Apple Pencil support and a floating one-handed keyboard.

In my opinion, the iPad’s A10 Fusion processor is a little disappointing; I expected at least an A12 Bionic, though that might have affected the price.
The new iPad’s price starts at $329, and $299 for education.

Apple Watch

apple_image3

Apple announced Series 5 of its most popular smartwatch.

  • Always-On display technology: the Apple Watch integrates new display hardware with an ambient light sensor and watchOS software support.
  • Compass and current elevation.
  • Swimproof, plus the other existing features of Series 4.

The Apple Watch Series 5 is available in a wider range of materials, including aluminum, stainless steel, ceramic and an all-new titanium.

Pricing starts at $399 for the aluminum model and $1,400 for the ceramic Apple Watch Edition. Apple also launched Apple Watch Studio this year, where you can customize your Apple Watch your way, i.e. choose the straps and casing that you love. How amazing!

iPhone 11 & 11 Pro

apple_image4

apple_image5

The primary focus of the Apple event was the iPhone lineup, with camera improvements.

For the first time, Apple brings a Night mode feature to the camera.
QuickTake is a new video recording feature that makes it easier to take videos by long-pressing the camera shutter button in iOS 13.
The iPhone 11 has a slow-motion selfie mode, a first for the front camera. Apple likes to call these “Slofies”.

There are 3 new models introduced.

iPhone 11
  • Powered by the new A13 Bionic chip, the fastest processor Apple has made yet.
  • Dual 12MP cameras (Ultra Wide and Wide).
  • One hour longer battery life than the iPhone XR.
  • Six color variants: black, white, red, purple, green, and yellow.
  • Prices start from $699 for 64GB of storage, lower than the iPhone XR.
iPhone 11 Pro & Pro Max

As expected, it comes with a triple-camera setup on the rear.

  • Camera Includes a telephoto lens, a wide-angle lens, and a super wide angle lens.
  • The iPhone 11 Pro offers up to four hours longer battery life than the iPhone XS.
  • The iPhone 11 Pro Max offers up to five hours longer battery life than the iPhone XS Max.

The new iPhones also feature a matte finish, as opposed to the glossy glass finish of the iPhone XS.

The iPhone 11 Pro starts at $999 for 64 GB of storage. The iPhone 11 Pro Max starts at $1,099 for 64 GB.

To see more specification please visit:
https://www.apple.com/apple-events/september-2019/

Conclusion

Honestly, everyone all over the world had high hopes for Apple this year. Did you hear the crazy rumours flying around before the event? With all this buzz, let’s see if it helps Apple in the market; we’ll find out in the long run. That’s all for this year’s event. Thank you.

Morph Motion


Overview

Hello Animators, let’s get familiar with Morph Motion. We all know Motion Graphics and its transitions, but nowadays these transitions often originate from morphing, a special effect used in motion film and animation to create more realistic transitions between objects or shapes. To morph means to change the points of a shape, moving from one shape into another; morphing is a transition between two shapes or two frames.

Concept of Morph

Morphing is a special effect in Motion Graphics and animation that transforms one image or shape into another through a smooth transition. It is one of the cross-fading techniques applied to a motion scene.

Another word for morphing is tweening: the process of generating intermediate frames between two images or frames, which makes it look like the second image slowly emerges from the first.

morph_image1
This is a morph image of George W. Bush (43rd president of the United States) and Arnold Schwarzenegger (American actor and politician)

We can create motion between text, logos, and shapes through morphing. After Effects is a tool that can create a morph between footage keyframes. Morphing techniques are classified into two kinds: mesh-based methods and feature-based methods. In mesh-based methods, the images are defined by a solid vertex mesh; in feature-based methods, the images are described as lines or a set of points. Nowadays, feature-based methods are the more popular of the two.

Here is an example of morphing between a circle, square, and triangle with bounce motion. It is a simple type of shape morphing in After Effects, using the mesh-based method. In mesh-based morphing, first decide the edge points of every shape path you use. Then keyframe each shape’s path and paste all of those keys onto one shape’s path property. Space the keyframes at different times with the same gap, and set them all to Easy Ease. For the same shine on every shape, apply the Glow effect.

morph_image2

How to create Morphing

Here we have created the Morph Effect between two texts. So let’s do this…

Step 1:

Make a new After Effects project with a 1080×1080 px composition.

Step 2:

Write two letters or text that you want to morph.

Step 3:

Then right-click each text layer and choose “Create Shapes from Text”.

Step 4:

It now generates outline layers in the timeline. Find both outlines’ path properties and create a keyframe for each letter.

Step 5:

Copy all keyframes, one by one, onto the other outline letters’ paths.

Step 6:

Set all keyframes to a different time with the same Time gap.

Step 7:

For better animation movement, adjust each letter’s first vertex point, so the animation starts at that point of the edge.

Step 8:

Make all keyframes easy ease.

So this is our text morphing…

morph_image3

morph_image4

I added some additional touches (font shadow, reflection, color tone) for a better look and smoother morphing.

Where we use Morphing

Morphing is used in titles, advertisements, logo animation, and story animation. Here I have created an animation between Google’s logo and the Google Assistant’s logo. In this way, we can apply morphing to Motion Graphics.

morph_image5

Conclusion

Morphing treats every element as a shape with edges, so if you want to create more stylish motion, you will need masking in your morphing. I hope this helps you understand the concept of morph motion.

Apple U1 Chip


Overview

Along with the new iPhone 11, Apple also launched the U1 chip, though it didn’t mention the chip in the keynote. What does the chip do? The U1 provides new capabilities, like a more accurate version of AirDrop, on the new iPhone models with iOS 13.

What is the U1 chip?

The U1 chip uses Ultra-Wideband (UWB) technology that allows the new iPhone to locate and communicate with other U1-equipped devices. The “U” in U1 stands for “ultra-wideband”, a low-energy, short-range radio technology used for wireless transmission.

What could iPhone 11’s U1 chip do?

u1_image1

Apple is the first company to use Ultra-Wideband (UWB) technology in smartphones. The U1 chip is mainly used in AirDrop, which Apple calls “the latest directional version of AirDrop”: it shows another U1-equipped device’s direction and distance using radio waves. Apple is also merging Find My Friends and Find My iPhone into a single “Find My” app in iOS 13.

Difference between Bluetooth and U1(UWB) chip

u1_image2

Bluetooth does something similar to UWB in the existing Tile tracker, which works by measuring signal strength. But Bluetooth is imprecise and insecure compared with UWB. UWB uses a time-of-flight calculation to measure distance, and it can also give the direction to another device, for example, “the device is 2 feet away, to the left.” In the future, new trackers may use this latest U1 chip.

Devices with U1 chip

The iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max are the new devices that carry the U1 (Ultra-Wideband) chip.

Conclusion

The U1 chip may open many possibilities in functionality like AR, home automation, and location-based apps. That’s just awesome!! Apps can become more powerful and reliable by utilizing the U1 chip.

Clutch Awards: Yudiz Solutions Named Top Mobile App Developer In The Gaming Industry


clutch_image1

At Yudiz Solutions, our team prides itself on the ability to imagine. We love when our creators and developers get lost in the magic of the gaming world. This approach keeps our apps interesting, and it’s the reason we can transform our clients’ ideas into reality with our mobile app solutions.

Recently, Clutch published their picks for Industry Leaders in the gaming industry, and we are proud to announce that Yudiz Solutions has been named a leader! Clutch is a B2B ratings and reviews platform that highlights leaders by location or industry. This gives everyone the chance to be seen in the industry, so the companies with projects that are looking for their ideal match don’t have to rely on who paid the most in SEO to be listed first. It’s all about the work, and we genuinely care about the work.

These awards are review-based, so our clients’ opinions really do mean a lot to us. It’s inspiring to see that so many of our previous clients enjoyed our partnerships.

clutch_image2

It’s an interesting time in the gaming industry. According to Capcom, an estimated 25 billion dollars’ worth of gaming products will be purchased in 2019. That’s a lot of people picking Luigi in Mario Kart and even more fitting shapes together on Tetris.

Stay in the know about current industry guidelines in the gaming sphere on The Manifest, a business resource for companies looking for quality information that’s useful to their business success. And to see the work that we’re so proud of making, check out our digital portfolio on Visual Objects. Visual Objects is a Clutch site that is a resource for businesses looking for partners.

We’re excited to see where the industry takes us and even lead the way for a few companies just starting out. We’re thankful to our clients for the reviews.

We’re also awarded as a development leader by Clutch, know more about it on our blog.

Be sure to stay updated on the Yudiz Solutions blog.

You can also book time with our team to discuss a potential development project.
