Google Adds Kotlin as an Official Programming Language For Android Development

On 17 May 2017, at the Google I/O keynote, Google's Android team announced first-class support for Kotlin, a programming language built by JetBrains, the company that also develops IDEs for languages such as Java and Python.

JetBrains also develops IntelliJ IDEA, the platform on which Android Studio, Google's official developer tool, is based. Kotlin is a language that targets the Java Virtual Machine; in a nutshell, Android doesn't actually run a standard JVM, but Java has strong roots on the platform, and Kotlin's full interoperability with Java makes it a popular choice for Android developers.

According to reports, Kotlin tools will be available by default in the upcoming Android Studio 3.0, and Google plans to give the language long-term support on Android. Kotlin also supports a number of features that Java itself doesn't currently offer.

Google says that Kotlin is “a brilliantly designed, mature language that we believe will make Android development faster and more fun.”

Readers who want to try features like “Convert Java File to Kotlin File” can go ahead and install the Android Studio 3.0 preview using this link. Read more details on the kotlinlang blog.

Do you think Kotlin can beat Java in Android development? Share your views in the comments.


Top 5 Programming Languages For Artificial Intelligence (AI)


AI programs have been written in just about every language ever created. The most common seem to be Lisp, Prolog, C/C++, recently Java, and even more recently, Python.

Programming Languages for AI:

1. LISP:

In the 1970s and 1980s, Lisp was the best developed and most widely used language that offered the following set of features:

  1. Easy dynamic creation of new objects, with automatic garbage collection,
  2. A library of collection types, including dynamically-sized lists and hashtables,
  3. A development cycle that allows interactive evaluation of expressions and re-compilation of functions or files while the program is running,
  4. Well-developed compilers that could generate efficient code,
  5. A macro system that let developers create a domain-specific level of abstraction on which to build the next level.

These five features are valuable for programming in general, but especially for exploratory problems where the solution is not clear at the outset; thus Lisp was a great choice for AI research. Over the years, these features started migrating into other languages, and Lisp no longer had a unique position; today, (5) is the only remaining feature in which Lisp excels compared to other languages.

2. PROLOG:

Prolog is a high-level programming language based on formal logic. Unlike traditional programming languages that are based on performing sequences of commands, Prolog is based on defining and then solving logical formulas. Prolog is sometimes called a declarative language or a rule-based language because its programs consist of a list of facts and rules. Prolog is used widely for artificial intelligence applications, particularly expert systems.
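As a rough illustration of the facts-and-rules idea, here is a toy sketch written in Python rather than actual Prolog syntax (the family facts and the single grandparent rule are made up for illustration):

```python
# Toy rule-based sketch (Python analogy, not Prolog): facts plus a rule, applied to derive new facts.
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def grandparent_rule(facts):
    """Prolog equivalent: grandparent(X, Z) :- parent(X, Y), parent(Y, Z)."""
    derived = set()
    for rel1, x, y1 in facts:
        for rel2, y2, z in facts:
            if rel1 == rel2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

facts |= grandparent_rule(facts)
print(("grandparent", "tom", "ann") in facts)  # True: derived from the two parent facts
```

A real Prolog engine generalizes this idea: the programmer only states the facts and rules, and the engine searches for every solution automatically.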

3. Java:

Java uses several ideas from Lisp, most notably garbage collection. Its portability makes it desirable for just about any application, and it has a decent set of built-in types. Java is still not as high-level as Lisp or Prolog, and not as fast as C.

4. Python:

Python is the preferred choice of many for getting started with artificial intelligence because it is one of the easiest languages to learn and one of the fastest to develop in, with a rich ecosystem of machine-learning libraries; most AI developers suggest Python for artificial intelligence development.
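As a small illustration of that ease of use, here is a minimal sketch that trains a tiny neural network on scikit-learn's bundled digits dataset (scikit-learn is assumed to be installed; the model and its parameters are just examples, not a recommendation from the article):

```python
# Minimal sketch: train a small neural network classifier with scikit-learn.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)                      # learn from the training images
print("Test accuracy:", clf.score(X_test, y_test))
```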

5. Haskell:

Many of the major algorithms are already available as packages via Cabal. Haskell also has CUDA bindings and compiles to efficient native code, and because it is purely functional and avoids shared mutable state, programs can easily be parallelized and run on multiple CPUs in the cloud. Overall, it's an excellent language for AI development.

Did you find this article helpful? Don't forget to give your feedback in the comments!


Google’s AutoDraw

AutoDraw is Google's new artificial intelligence experiment, which pairs machine learning with artists' drawings to help you draw anything, fast. Basically, it is a suggestion tool that offers a polished version of whatever shapes, figures, or rough scribbles you draw.

AutoDraw may be just a drawing tool for most of us, but it can also help artistically challenged individuals who can't manage a decent doodle on their own.

AutoDraw is machine learning in action: it compares the doodles we draw against what its neural network has learned, predicts which shapes and figures they are meant to represent, and in return suggests polished drawings to replace our rough, sloppy sketches.

Don't expect perfection, though. After all, it's AI: it learns on its own and suggests the best match it can.

AutoDraw is a web-based program, so it works on any platform, be it a laptop or a smartphone, and it offers a nice, clean canvas to play with.

Its primary features include drawing, typing, and color fill. A menu lets you start a new page with different canvas ratios, and there are download and share options as well.

Google has made the tool free and accessible to the general public, and it's surely taking off.

Now check out this link to start doodling 🙂 — AutoDraw


Google DeepMind Open-Sources Sonnet Library: Now You Can Build Complex Neural Networks

Google DeepMind has released its “Sonnet” framework library. The Sonnet codebase is roughly 97.0% Python, 2.9% C++, and 0.1% shell code. The main motive behind the release is to help other developers and researchers build complex neural networks quickly and easily.


After launching Google's own open-source website, Google DeepMind is now open-sourcing its Sonnet framework library. Sonnet was developed by DeepMind engineers on top of TensorFlow for quickly building neural network modules.

For those who don't know, Google DeepMind is one of the world's most advanced machine-learning and artificial-intelligence research labs. Beating the world champion at the ancient Chinese board game Go is one of DeepMind's biggest achievements; the Go-playing program is known as AlphaGo.

By open-sourcing Sonnet, DeepMind wants to expand its community; the main motive behind this release is that DeepMind developers want to help others build complex neural networks for their own projects and research.

“Sonnet uses an object-oriented approach, similar to Torch/NN, allowing modules to be created which define the forward pass of some computation,” DeepMind said in its blog post.

“Modules are ‘called’ with some input Tensors, which adds ops to the Graph and returns output Tensors,” the post continues. Variable sharing is handled transparently by automatically reusing variables on subsequent calls to the same module.
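To make that concrete, here is a minimal sketch in the style described, assuming the TensorFlow-1-era Sonnet 1.x API (`snt.AbstractModule` with a `_build` method); the module name and sizes are illustrative, not taken from DeepMind's post:

```python
# Sketch of a Sonnet-style module (assumes the early Sonnet 1.x API on TensorFlow 1).
import sonnet as snt
import tensorflow as tf

class MLP(snt.AbstractModule):
    def __init__(self, hidden_size, output_size, name="mlp"):
        super(MLP, self).__init__(name=name)
        self._hidden_size = hidden_size
        self._output_size = output_size

    def _build(self, inputs):
        # The forward pass of the computation is defined here.
        hidden = tf.nn.relu(snt.Linear(self._hidden_size)(inputs))
        return snt.Linear(self._output_size)(hidden)

mlp = MLP(hidden_size=128, output_size=10)
images = tf.placeholder(tf.float32, [None, 784])
logits_a = mlp(images)  # first call adds ops to the graph and creates variables
logits_b = mlp(images)  # second call transparently reuses the same variables
```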

DeepMind has also made changes to core TensorFlow to treat models as hierarchies, which will make it easier for users to switch between modules while running experiments.

Sonnet is available in a GitHub repository, and DeepMind has also published a paper describing the initial version of Sonnet.

“This is not a one-time release. There are many more releases to come, and they'll all be shared on our new Open Source page,” said DeepMind.


Success stories applying data mining

What can data mining do?

Data mining is primarily used today by companies with a strong consumer focus – retail, financial, communication, and marketing organizations. It enables these companies to determine relationships among “internal” factors such as price, product positioning, or staff skills, and “external” factors such as economic indicators, competition, and customer demographics. And, it enables them to determine the impact on sales, customer satisfaction, and corporate profits. Finally, it enables them to “drill down” into summary information to view detail transactional data.

With data mining, a retailer could use point-of-sale records of customer purchases to send targeted promotions based on an individual’s purchase history. By mining demographic data from comment or warranty cards, the retailer could develop products and promotions to appeal to specific customer segments.

For example, Blockbuster Entertainment mines its video rental history database to recommend rentals to individual customers. American Express can suggest products to its cardholders based on analysis of their monthly expenditures.

WalMart is pioneering massive data mining to transform its supplier relationships. WalMart captures point-of-sale transactions from over 2,900 stores in 6 countries and continuously transmits this data to its massive 7.5 terabyte Teradata data warehouse. WalMart allows more than 3,500 suppliers to access data on their products and perform data analyses. These suppliers use this data to identify customer buying patterns at the store display level. They use this information to manage local store inventory and identify new merchandising opportunities. In 1995, WalMart computers processed over 1 million complex data queries.

The National Basketball Association (NBA) is exploring a data mining application that can be used in conjunction with image recordings of basketball games. The Advanced Scout software analyzes the movements of players to help coaches orchestrate plays and strategies. For example, an analysis of the play-by-play sheet of the game played between the New York Knicks and the Cleveland Cavaliers on January 6, 1995 reveals that when Mark Price played the Guard position, John Williams attempted four jump shots and made each one! Advanced Scout not only finds this pattern, but explains that it is interesting because it differs considerably from the average shooting percentage of 49.30% for the Cavaliers during that game.

By using the NBA universal clock, a coach can automatically bring up the video clips showing each of the jump shots attempted by Williams with Price on the floor, without needing to comb through hours of video footage. Those clips show a very successful pick-and-roll play in which Price draws the Knicks' defense and then finds Williams for an open jump shot.

How does data mining work?

While large-scale information technology has been evolving separate transaction and analytical systems, data mining provides the link between the two. Data mining software analyzes relationships and patterns in stored transaction data based on open-ended user queries. Several types of analytical software are available: statistical, machine learning, and neural networks. Generally, any of four types of relationships are sought:

  • Classes: Stored data is used to locate data in predetermined groups. For example, a restaurant chain could mine customer purchase data to determine when customers visit and what they typically order. This information could be used to increase traffic by having daily specials.
  • Clusters: Data items are grouped according to logical relationships or consumer preferences. For example, data can be mined to identify market segments or consumer affinities.
  • Associations: Data can be mined to identify associations between items. The classic beer-and-diapers example is an example of association mining (see the sketch after this list).
  • Sequential patterns: Data is mined to anticipate behavior patterns and trends. For example, an outdoor equipment retailer could predict the likelihood of a backpack being purchased based on a consumer’s purchase of sleeping bags and hiking shoes.
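As a toy illustration of the association case above, here is a minimal Python sketch (with made-up transactions) that computes the support and confidence of the rule “diapers ⇒ beer”:

```python
# Toy association-mining sketch: support and confidence for the rule "diapers => beer".
transactions = [
    {"beer", "diapers", "chips"},
    {"beer", "diapers"},
    {"milk", "diapers"},
    {"beer", "chips"},
    {"milk", "bread"},
]

def support(itemset):
    """Fraction of transactions that contain every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

rule_support = support({"diapers", "beer"})        # P(diapers and beer) = 0.40
confidence = rule_support / support({"diapers"})   # P(beer | diapers)  ~ 0.67
print(f"support={rule_support:.2f}, confidence={confidence:.2f}")
```

Real association-mining tools apply the same support/confidence idea across millions of transactions and thousands of candidate rules.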

Data mining consists of five major elements:

  • Extract, transform, and load transaction data onto the data warehouse system.
  • Store and manage the data in a multidimensional database system.
  • Provide data access to business analysts and information technology professionals.
  • Analyze the data by application software.
  • Present the data in a useful format, such as a graph or table (see the short end-to-end sketch after this list).
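As a rough end-to-end sketch of those elements (extract/transform, store, analyze, present), here is a short Python example using pandas; the sales.csv file and its columns are hypothetical:

```python
# Hypothetical mini pipeline: extract, transform, aggregate, and present sales data.
import pandas as pd

# Extract: load raw point-of-sale transactions (hypothetical file and columns).
raw = pd.read_csv("sales.csv", parse_dates=["date"])

# Transform: drop incomplete rows and derive a month column for analysis.
raw = raw.dropna(subset=["store", "product", "amount"])
raw["month"] = raw["date"].dt.to_period("M")

# Store/analyze: build a multidimensional summary (month x product revenue).
summary = raw.pivot_table(index="month", columns="product",
                          values="amount", aggfunc="sum", fill_value=0)

# Present: show the summary as a table (or chart it with a plotting library).
print(summary.head())
```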

Different levels of analysis are available:

  • Artificial neural networks: Non-linear predictive models that learn through training and resemble biological neural networks in structure.
  • Genetic algorithms: Optimization techniques that use processes such as genetic combination, mutation, and natural selection in a design based on the concepts of natural evolution.
  • Decision trees: Tree-shaped structures that represent sets of decisions. These decisions generate rules for the classification of a dataset. Specific decision tree methods include Classification and Regression Trees (CART) and Chi-Square Automatic Interaction Detection (CHAID). CART and CHAID are decision tree techniques used for classification of a dataset. They provide a set of rules that you can apply to a new (unclassified) dataset to predict which records will have a given outcome. CART segments a dataset by creating 2-way splits, while CHAID segments using chi-square tests to create multi-way splits. CART typically requires less data preparation than CHAID. A small CART-style sketch follows this list.
  • Nearest neighbor method: A technique that classifies each record in a dataset based on a combination of the classes of the k record(s) most similar to it in a historical dataset (where k ≥ 1). Sometimes called the k-nearest neighbor technique.
  • Rule induction: The extraction of useful if-then rules from data based on statistical significance.
  • Data visualization: The visual interpretation of complex relationships in multidimensional data. Graphics tools are used to illustrate data relationships.
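To make the decision-tree level concrete, here is a minimal sketch with scikit-learn (an assumption about tooling, not something the article prescribes) that fits a CART-style classifier and prints the if-then rules it learned:

```python
# CART-style decision tree: learn classification rules from labeled data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)  # binary (2-way) splits, as in CART
tree.fit(X_train, y_train)

print("Accuracy on held-out data:", tree.score(X_test, y_test))
print(export_text(tree))  # the learned if-then rules in readable form
```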

 

What technological infrastructure is required?

Today, data mining applications are available for systems of all sizes, from mainframe and client/server platforms down to PCs. System prices range from several thousand dollars for the smallest applications up to $1 million a terabyte for the largest. Enterprise-wide applications generally range in size from 10 gigabytes to over 11 terabytes. NCR has the capacity to deliver applications exceeding 100 terabytes. There are two critical technological drivers:

  • Size of the database: the more data being processed and maintained, the more powerful the system required.
  • Query complexity: the more complex the queries and the greater the number of queries being processed, the more powerful the system required.

Relational database storage and management technology is adequate for many data mining applications less than 50 gigabytes. However, this infrastructure needs to be significantly enhanced to support larger applications. Some vendors have added extensive indexing capabilities to improve query performance. Others use new hardware architectures such as Massively Parallel Processors (MPP) to achieve order-of-magnitude improvements in query time. For example, MPP systems from NCR link hundreds of high-speed Pentium processors to achieve performance levels exceeding those of the largest supercomputers.


Data Warehouse: a decision-making process!

Data Warehouse Definition

Different people have different definitions for a data warehouse. The most popular definition came from Bill Inmon, an American computer scientist, recognized by many as the father of the data warehouse, who provided the following:

A data warehouse is a subject-oriented, integrated, time-variant and non-volatile collection of data in support of management’s decision making process.

Subject-Oriented: A data warehouse can be used to analyze a particular subject area. For example, “sales” can be a particular subject.

Integrated: A data warehouse integrates data from multiple data sources. For example, source A and source B may have different ways of identifying a product, but in a data warehouse, there will be only a single way of identifying a product.

Time-Variant: Historical data is kept in a data warehouse. For example, one can retrieve data from 3 months, 6 months, 12 months, or even older data from a data warehouse. This contrasts with a transaction system, where often only the most recent data is kept. For example, a transaction system may hold the most recent address of a customer, where a data warehouse can hold all addresses associated with a customer.

Non-volatile: Once data is in the data warehouse, it will not change. So, historical data in a data warehouse should never be altered.
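Here is a tiny Python sketch of the time-variant and non-volatile properties, using hypothetical customer-address records (not data from the article): the transaction system overwrites the current address, while the warehouse keeps every address with its validity period.

```python
# Transaction system: only the most recent address survives an update.
crm_customer = {"customer_id": 42, "address": "12 Oak Street"}
crm_customer["address"] = "7 Elm Avenue"  # the old value is overwritten and lost

# Data warehouse: history is appended, never altered (hypothetical records).
dw_customer_address = [
    {"customer_id": 42, "address": "12 Oak Street", "valid_from": "2015-01-01", "valid_to": "2017-03-31"},
    {"customer_id": 42, "address": "7 Elm Avenue", "valid_from": "2017-04-01", "valid_to": None},
]

# "Where did customer 42 live in mid-2016?" can only be answered from the warehouse.
as_of = "2016-06-15"
print([r for r in dw_customer_address
       if r["valid_from"] <= as_of and (r["valid_to"] is None or as_of <= r["valid_to"])])
```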

Ralph Kimball, who has established many of the industry's best practices for data warehousing and business intelligence over the past three decades, provided a more concise definition of a data warehouse:

A data warehouse is a copy of transaction data specifically structured for query and analysis.

This is a functional view of a data warehouse. Unlike Inmon, Kimball did not address how the data warehouse is built; rather, he focused on the functionality of a data warehouse.


Ionic #2: Get Started with Ionic!

How much do you know about Ionic technology?

Check out these slides first!

http://ionicframework.com/present-ionic/slides/#/

Now that you know enough, here's a demo of how to start your first app.

Go publish your app and let us know what you think.

 

XeConcepts.com


Ionic #1: All About Ionic

Ionic is a complete open-source SDK for hybrid mobile app development.  Built on top of AngularJS and Apache Cordova, Ionic provides tools and services for developing hybrid mobile apps using Web technologies like CSS, HTML5, and Sass. Apps can be built with these Web technologies and then distributed through native app stores to be installed on devices by leveraging Cordova. Ionic was created by Max Lynch, Ben Sperry, and Adam Bradley of Drifty Co. in 2013.

If you’ve used other mobile development frameworks in the past, you should find Ionic fairly similar to use. But getting started with any framework is always daunting, so we will start simple and expand on some basic concepts. But first, we need to talk a bit about the Ionic project itself, where it fits into the dev stack, and why it was built.

What is Ionic, and where does it fit?

Ionic is an HTML5 mobile app development framework targeted at building hybrid mobile apps. Hybrid apps are essentially small websites running in a browser shell in an app that have access to the native platform layer. Hybrid apps have many benefits over pure native apps, specifically in terms of platform support, speed of development, and access to 3rd party code.

Think of Ionic as the front-end UI framework that handles all of the look and feel and UI interactions your app needs in order to be compelling. Kind of like “Bootstrap for Native,” but with support for a broad range of common native mobile components, slick animations, and beautiful design.

Unlike a responsive framework, Ionic comes with very native-styled mobile UI elements and layouts that you’d get with a native SDK on iOS or Android but didn’t really exist before on the web. Ionic also gives you some opinionated but powerful ways to build mobile applications that eclipse existing HTML5 development frameworks.

Since Ionic is an HTML5 framework, it needs a native wrapper like Cordova or PhoneGap in order to run as a native app. We strongly recommend using Cordova proper for your apps, and the Ionic tools will use Cordova underneath.

Why was Ionic built?

Because the creators of Ionic strongly believed that HTML5 would rule on mobile over time, exactly as it has on the desktop. Once desktop computers became powerful enough and browser technology had advanced enough, almost everyone was spending their computing time in the browser. And developers were overwhelmingly building web applications. With recent advancements in mobile technology, smartphones and tablets are now capable of running many of those same web applications.

With Ionic, the desire was to build an HTML5 mobile development framework that was focused on native or hybrid apps instead of mobile websites, since there were great tools already for mobile website development. So Ionic apps aren’t meant to be run in a mobile browser app like Chrome or Safari, but rather the low-level browser shell like iOS’s UIWebView or Android’s WebView, which are wrapped by tools like Cordova/PhoneGap.

And above all, they wanted to make sure Ionic was as open source as possible, not only by choosing a permissive open source license that can be used in both commercial and open source apps, but also by cultivating a strong community around the project. They felt there were too many frameworks that were technically open source but not open source in spirit, or that could not be used in both closed source and open source projects without purchasing a commercial license.

Building Hybrid Apps With Ionic

Those familiar with web development will find the structure of an Ionic app straightforward. At its core, it's just a web page running in a native app shell! That means you can use any kind of HTML, CSS, and JavaScript you want. The only difference is that, instead of creating a website that others will link to, you are building a self-contained application experience.

The bulk of an Ionic app will be written in HTML, Javascript, and CSS. Eager developers might also dig down into the native layer with custom Cordova plugins or native code, but it’s not necessary to get a great app.

Ionic also uses AngularJS for a lot of the core functionality of the framework. While you can still use Ionic with just the CSS portion, we recommend investing in Angular as it’s one of the best ways to build browser-based applications today.

Get building!

Now that you have an understanding of what Ionic is and why it exists, you are ready to start building your first app with it. Follow our blog to get everything installed and start building with Ionic!

XeConcepts.com

(Source)


Hello Ionic!

We will be sharing with you why we use these incredible technologies.
This week's topic is Ionic. What is it? How efficient is it? And what do our customers think about it?
Watch out for our next posts!

XeConcepts.com
