
The Art Of Selling A Tiny IT Project

Building tiny projects in your free time is something every programmer should do.

The main goal is to test out ideas; sometimes one works, and you can even make good money from it.

The process of selling a website or application has always been an art.

Therefore, in this post I’ll tell you what it is like to sell a tiny project, and how I think anyone can do it.

1. Building a Project

According to many successful small-business founders and serial entrepreneurs, your very best idea may not be quite what your customers want, for whatever reason.

If you’ve spent all your capital on this one product that doesn’t quite get customers to buy, your business may run out of money before it recovers from the misstep. 

That’s a big risk to take, especially if you’re funding your new home-based business out of your own savings.

A more prudent plan is to start by offering a basic version of what you’ve heard your customers ask for, and then ask for feedback.

One of the best books that I have read about how to build a successful startup is The Lean Startup by Eric Ries.

The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses

2. Meeting the Buyer

It can be a good idea to make your introductions in a relatively informal setting like a business lunch, where both parties can get to know each other a bit. 

This will help break the ice and can help you feel more at ease.

It’s important to not let your emotions get in the way, as this could influence the buyer’s position.

Clearly state why you want to sell your business, and try to remain as objective as possible. 

It’s possible that the buyer might say something that upsets you or makes you angry, but if this happens, do your best not to show it.

During the meeting, allow plenty of room for questions. If you don’t have the answer to a specific question, say that you’ll look into the matter and let him know by email.

Don’t shy away from speaking about challenges your business might be encountering. A buyer who’s really interested will find out anyway during due diligence, so you’re best advised to get everything out in the open.

I highly recommend the book Get the Meeting!: An Illustrative Contact Marketing Playbook; it gives you the set of tools you need to get those meetings.

Get the Meeting!: An Illustrative Contact Marketing Playbook

3. Negotiating a Price

When negotiating, seek advantages that allow you to exploit your strength, but don’t disparage the other negotiator in your enthusiasm to obtain victory.

When a negotiation outcome is less than expected, learn from the experience. Commit to getting better. Increase your knowledge of how to use the right tactic, with the right strategy, aligned with the right situation.

Make sure you observe and control your biases when assessing the person with whom you’ll be negotiating.

I recommend the book: Never Split the Difference: Negotiating as if Your Life Depended on It, it shows how to be effective when negotiating.

Never Split the Difference: Negotiating as if Your Life Depended on It

4. Receiving the Payment

How do you securely exchange code for cash?

I think it would be better to use an escrow service.

There are many platforms providing this service, such as escrow.com.

It works like this:

  • The buyer transfers the money to the escrow service.
  • You transfer the domain name, users, and GitHub repository to the buyer.
  • You hold a brief video call explaining the code.
  • The buyer has a few days to try everything out.
  • The escrow service transfers the money to you.

That’s it, the deal is done.

Conclusion:

I understand that to sell a project you should at least have a small audience.

So, as a programmer, you have to be good technically and a good marketer in order to make money on the internet.

A good strategy could be :

  • Build things you enjoy
  • Write about the process
  • Attract a small audience
  • Attract opportunities (buyers, customers, job offers)

It’s that simple. Code something for a few weeks, maybe publish it on your own blog.

One person will probably read it, and that’s awesome. Next time it might be ten!

Keep building lots of little things that pique your interest, talk about them, and great things will start happening.

Bonus:

The working environment of an average programmer entails sitting at a desk for long hours surrounded by gadgets.

The reality is that programmers run a real risk of developing certain health conditions and computer-related injuries.

From my personal experience, I sometimes suffer from back pain caused by long hours sitting in front of my computer, often in a poor position.

I recommend a Posture Corrector to regain proper posture, which can help prevent the onset of back, neck and shoulder pain. The Posture Corrector helps provide alignment while sitting, standing, lying down or during your other daily activities.

I write one article per week about programming. Thanks for supporting me on Patreon by becoming a contributor 🙂

Some related articles you might be interested in:

1-Make The Code Better Than You Found It

2– 4 Practical Books for Software Architecture

3-The Design Cannot Be Taught

4– 6 Best Programmers of All Time

5-How To Make Your Code Reviewer Like You

6-Most Graduates Unable to Pass Coding Interviews


Most Graduates Unable to Pass Coding Interviews

The number one reason that most graduates with a BS in Computer Science are not able to pass technical coding interviews might surprise you.

I had prepared for a technical interview for a web developer position on our team. I had two questions ready, questions that I had asked candidates multiple times before.

The candidate came in and sat down. He had a degree in Computer Science and a long list of credentials that would prepare him for a mid-level job, and I was prepared to interview him for an entry-level developer job.

I explained the first question and let him answer.

The candidate struggled to write simple code on paper.

So, from this experience I took away this lesson:

As an interviewer, it is your job to identify candidates that are both smart and get things done. It turns out there are very few people who are both of these things that are interviewing for programming positions.

Software development is something that is difficult and that is why good programmers are in demand. Getting the skills to become an in-demand programmer actually isn’t that difficult at all.

There are only three takeaways that you need to be an in-demand developer, regardless of whether you have a degree in Computer Science or not!

The developers who focus on mastering these three skills have huge advantages in their programming career. Here’s what actually matters.


1- Code-writing principles come from the 70s

General principles of software development go back a long way and have not changed much since.

Sure the languages have evolved and we have built on these simple principles to create great things. BUT, the same principles do still apply, so make sure to become very well accustomed to them.

CHECK THIS BOOK: Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin

2-Group code by responsibility

Code has a very specific reason for existence.

It solves a problem.

If you follow this problem backwards, you will end up at a specific role or even a specific person.

Group code with similar responsibilities together.

CHECK THIS BOOK: The Pragmatic Programmer by David Thomas and Andrew Hunt

3-Do not stop learning

One of the worst things that can happen to a software developer is to become obsolete. 

Staying ahead of your time is a key skill to have in these days of change. Keep getting accustomed to new languages and frameworks.

Follow or even contribute to the software community. 

Talk with other professionals and keep yourself relevant.

CHECK THIS BOOK: The Clean Coder by Robert C. Martin

Conclusion:

Once you understand these concepts and work to improve yourself, you’ll find that you’re the type of in-demand developer who is able to pass coding interviews with ease.

Bonus:

The working environment of an average programmer entails sitting at a desk for long hours surrounded by gadgets.

The reality is that programmers run a real risk of developing certain health conditions and computer-related injuries.

From my personal experience, I sometimes suffer from back pain caused by long hours sitting in front of my computer, often in a poor position.

I recommend a Posture Corrector to regain proper posture, which can help prevent the onset of back, neck and shoulder pain. The Posture Corrector helps provide alignment while sitting, standing, lying down or during your other daily activities.

I write one article per week about programming. Thanks for supporting me on Patreon by becoming a contributor 🙂

Some related articles you might be interested in:

1-Make The Code Better Than You Found It

2– 4 Practical Books for Software Architecture

3-The Design Cannot Be Taught

4– 6 Best Programmers of All Time

5-How To Make Your Code Reviewer Like You


How To Make Your Code Reviewer Like You

When we talk about code reviews, we focus on the reviewer.

But the developer who writes the code is just as important to the review as the person who reads it.

This article talks about the best books on the best practices for participating in a code review when you’re the author.

You’re going to be so good at sending out your code for review that your reviewer will like you.

1- Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin

The Clean Code

Even bad code can function. But if code isn’t clean, it can bring a development organization to its knees.

Every year, countless hours and significant resources are lost because of poorly written code. But it doesn’t have to be that way.

This book’s biggest strength is that it includes tons of code examples, including some long and in-depth ones.

Instead of just listing rules or principles of clean code, many of the chapters go through these code examples and iteratively improve them.

This book is a must-read for every professional software developer who wants to pass code review easily.

Strongly recommended!

2- Best Kept Secrets of Peer Code Review: Modern Approach. Practical Advice. by Smart Bear Inc.

Best Kept Secrets of Peer Code Review

Ten practical essays from industry experts give specific techniques for effective peer code review.

This book is nice and short; it provides actionable and useful tips for code review.

Highly recommended !

3- Learning Gerrit Code Review by Packt Publishing

Learning Gerrit Code Review

Learning Gerrit Code Review is a practical guide that provides you with step-by-step instructions for the installation, configuration, and use of Gerrit code review. 

Using this book speeds up your adoption of Gerrit through the use of a unique, consolidated set of recipes ready to be used for LDAP authentication and to integrate Gerrit with Jenkins and GitHub.

A very practical and concise book that guides you through the basic principles of code review and the setup of Gerrit.

4- The Pragmatic Programmer by David Thomas and Andrew Hunt

The pragmatic Programmer

This book is not so much about code itself; the main idea is to make you a better programmer: to think better, crack problems, and reason about algorithms by yourself.

It’s divided into topics within broader subjects. You can dip in wherever you like or read it cover to cover.

This book is not tied to a specific language and has no ‘recipe’ to follow; it will open your mind to think better.

I think this book is a must for every programmer.

Conclusion:

 As you participate in code reviews, look for patterns that stall progress or waste effort.

The more you value your reviewer’s time, the more high-quality feedback your reviewer will give you.

If you require them to untangle your code or police simple mistakes, you both suffer.

Emotions run hot when someone critiques your work, but be conscious of pitfalls that could make your reviewer feel attacked or disrespected.

Bonus:

The working environment of an average programmer entails sitting at a desk for long hours surrounded by gadgets.

The reality is that programmers run a real risk of developing certain health conditions and computer-related injuries.

From my personal experience, I sometimes suffer from back pain caused by long hours sitting in front of my computer, often in a poor position.

I recommend a Posture Corrector to regain proper posture, which can help prevent the onset of back, neck and shoulder pain. The Posture Corrector helps provide alignment while sitting, standing, lying down or during your other daily activities.

I write one article per week about programming. Thanks for supporting me on Patreon by becoming a contributor 🙂

Some related articles you might be interested in:

1-Make The Code Better Than You Found It

2– 4 Practical Books for Software Architecture

3-The Design Cannot Be Taught

4– 6 Best Programmers of All Time


Make The Code Better Than You Found It

As a developer you will spend a lot of time maintaining working code.

There are definitely times where you are writing more new code than maintaining, upgrading, bug fixing and improving old code, but in general code is expensive and folks want to run it for a long time.

Often you’ll jump into code to fix a bug, investigate an issue or answer a question.

When you do so, improve it. 

This doesn’t mean you rewrite it, or upgrade all the libraries it depends on, or rename all the variables.

You don’t need to transform it.


But you should make it better. Just clean it up a bit. 

So, in this article I will share with you some books that will help you to be a clean coder.

1- Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin

The Clean Code

Even bad code can function. But if code isn’t clean, it can bring a development organization to its knees.

Every year, countless hours and significant resources are lost because of poorly written code. But it doesn’t have to be that way.

This book’s biggest strength is that it includes tons of code examples, including some long and in-depth ones.

Instead of just listing rules or principles of clean code, many of the chapters go through these code examples and iteratively improve them.

This book is a must-read for every professional software developer.

Strongly recommended!

2-The Clean Coder by Robert C. Martin

The Clean Coder

This book contains practical advice about everything from estimating and coding to refactoring and testing.

You will learn how to communicate, estimate and deal with difficult situations at work.

The Clean Coder will help you become one of the best professionals.

3-Head First Design Pattern by Eric Freeman, Kathy Sierra

Head First Design Patterns

This book is a fast-track to design patterns, battle-proven solutions to commonly occurring problems in software design.

The book presents a complicated topic in a fun, readable and practical way.

Head First Design Patterns uses a visually rich format designed for the way your brain works, not a text-heavy approach that puts you to sleep.

Must-read for every developer doing OO design.

4- Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce

Growing Object-Oriented Software

Test-Driven Development (TDD) is now an established technique for delivering better software faster. 

TDD is based on a simple idea: Write tests for your code before you write the code itself. 

However, this “simple” idea takes skill and judgment to do well.

This book shows how to create a realistic project using TDD and is full of code examples. 

Strongly recommended for TDD developers!

5-Release It!: Design and Deploy Production-Ready Software by Michael Nygard 

Release it !

If you’re a software developer, and you don’t want to get alerts every night for the rest of your life, help is here. 

It combines case studies about huge losses (lost revenue, lost reputation, lost time, and lost opportunity) with practical, down-to-earth advice gained through painful experience.

This book helps you avoid the pitfalls that cost companies millions of dollars in downtime and reputation.

Get this book to skip the pain and gain the experience.

Conclusion:

Code isn’t everything, but it is an important work output.

Whenever you touch it, you should strive to leave it in a better place than it was before.

So, these books will help you become an excellent software craftsman.

Bonus:

The working environment of an average programmer entails sitting at a desk for long hours surrounded by gadgets.

The reality is that programmers run a real risk of developing certain health conditions and computer-related injuries.

From my personal experience, I sometimes suffer from back pain caused by long hours sitting in front of my computer, often in a poor position.

I recommend a Posture Corrector to regain proper posture, which can help prevent the onset of back, neck and shoulder pain. The Posture Corrector helps provide alignment while sitting, standing, lying down or during your other daily activities.

I write one article per week about programming. Thanks for supporting me on Patreon by becoming a contributor 🙂

Some related articles you might be interested in:

1-Invest Your Golden Time in Transferable Skills

2– 4 Practical Books for Software Architecture

3-The Design Cannot Be Taught

4– 6 Best Programmers of All Time


Invest Your Golden Time in Transferable Skills

The world of technology moves very fast, so we need to stay up to date.

Every day, we learn programming languages, frameworks, and libraries.

The more modern tools we know, the better.

Time is limited, nonrenewable and you cannot buy more of it.

Technology is moving faster than ever before.

To catch up, we need to run very fast. This race has no winners because it has no end.

So, invest your golden time in transferable skills: skills that will always be relevant.

Instead of reading a lot of books about frameworks, libraries, and so on, focus on books that teach you the fundamentals.

Example:

  • Instead of a new programming language, focus on Clean Code, Design Patterns, and DDD
  • Instead of Docker, learn more about Continuous Delivery
  • Instead of Angular, learn more about the Web, HTTP, and REST
  • Instead of Microservices frameworks, focus on Evolutionary Architecture

In this article I will share with you five excellent books that changed my life and that teach the fundamentals:

1- The Pragmatic Programmer by David Thomas and Andrew Hunt 

The Pragmatic Programmer Book

The Pragmatic Programmer is one of those rare tech books you’ll read, re-read, and read again over the years. 

Whether you’re new to the field or an experienced practitioner, you’ll come away with fresh insights each and every time.

This book is not so much about code itself; the main idea is to make you a better programmer: to think better, crack problems, and reason about algorithms by yourself.

There is not much more to say: the first edition was written 20 years ago, and when you start reading you can see the quality of the book.

It’s divided into topics within broader subjects. You can dip in wherever you like or read it cover to cover.

This book is not tied to a specific language and has no ‘recipe’ to follow; it will open your mind to think better.

I think this book is a must for every programmer.

2- Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin

Clean Code: A Handbook of Agile Software Craftsmanship

Even bad code can function. But if code isn’t clean, it can bring a development organization to its knees. 

Every year, countless hours and significant resources are lost because of poorly written code. But it doesn’t have to be that way.

This book’s biggest strength is that it includes tons of code examples, including some long and in-depth ones. 

Instead of just listing rules or principles of clean code, many of the chapters go through these code examples and iteratively improve them. 

This book is a must-read for every professional software developer.

Strongly recommended!

3-Head First Design Pattern by Eric Freeman, Kathy Sierra

Head First Design Pattern

This book is a fast-track to design patterns, battle-proven solutions to commonly occurring problems in software design. 

The book presents a complicated topic in a fun, readable and practical way. 

Head First Design Patterns uses a visually rich format designed for the way your brain works, not a text-heavy approach that puts you to sleep.

Must-read for every developer doing OO design.

4-The Clean Coder by Robert C. Martin

The Clean Coder

This book contains practical advice about everything from estimating and coding to refactoring and testing. 

You will learn how to communicate, estimate and deal with difficult situations at work.

The Clean Coder will help you become one of the best professionals and earn the pride and fulfillment that they alone possess.

5-Continuous Delivery By Jez Humble and David Farley

Continuous Delivery

This book sets out principles and technical practices that enable rapid delivery of software to users through automation of the build, deployment, and testing process.

Jez Humble and David Farley begin by presenting the foundations of a rapid, reliable, low-risk delivery process. Next, they introduce the “deployment pipeline,” an automated process for managing all changes, from check-in to release. 

Finally, they discuss the “ecosystem” needed to support continuous delivery, from infrastructure, data and configuration management to governance.

This book will help you to deliver fast and effectively.

Conclusion:

The longer a technology has been on the market, the safer an investment it is.

Don’t hurry to learn a new technology; it has a high probability of dying.

Time is your best advisor. Learn to wait.

Frameworks, libraries and tools come and go. Time is precious.

So, invest 70% of your time in fundamentals and 30% in frameworks, libraries, and tools.

Bonus:

The working environment of an average programmer entails sitting at a desk for long hours surrounded by gadgets.

The reality is that programmers run a real risk of developing certain health conditions and computer-related injuries.

From my personal experience, I sometimes suffer from back pain caused by long hours sitting in front of my computer, often in a poor position.

I recommend a Posture Corrector to regain proper posture, which can help prevent the onset of back, neck and shoulder pain. The Posture Corrector helps provide alignment while sitting, standing, lying down or during your other daily activities.

I write one article per week about programming. Thanks for supporting me on Patreon by becoming a contributor 🙂

Some related articles you might be interested in:

1– 4 Practical Books for Software Architecture

2-The Design Cannot Be Taught

3– 6 Best Programmers of All Time


Most Commonly Asked Java/JEE Interview Questions (Part-2)

In this article I will talk about the next part of commonly asked Java/JEE interview questions.

Before jumping into the list, I want to mention that it is great to be good technically, but you also have to be a great communicator to succeed.

Being a good communicator will make it easier to get the interview.

One of the tools that will help you write a good cover letter and email messages is Grammarly’s AI-powered writing assistant.

So, let’s go back to our list. 

25) What is a thin client?

A thin client is a lightweight, usually browser-based, client that leaves heavy work such as database queries and complex business logic to the server.

By contrast, a J2EE application client runs on a client machine and can provide a richer user interface than a markup language can; it is typically downloaded from the server, but can also be installed on the client machine.

26) Differentiate between .ear, .jar and .war files

a) .jar files

These files are with the .jar extension.

A .jar file contains libraries, resources, and accessory files such as property files.

b).war files

These files are with the .war extension. 

A .war file contains the JSP, HTML, JavaScript, and other files necessary for the development of a web application.

c).ear files

EAR is a file format used by Java EE for packaging one or more modules into a single archive so that the deployment of the various modules onto an application server happens simultaneously and coherently.

27) What are the JSP tag types?

JSP tags can be divided into 4 different types:
Directives, Declarations, Scriptlets and Expressions

28) What are JSP Directives?

a) page Directives
b) include Directives
c) taglib Directives

29) What is Struts?

The Struts framework is a Model-View-Controller (MVC) framework for designing large-scale applications.

It combines Java Servlets, JSP, custom tags, and message resources.

Struts helps create an extensible development environment for the application, based on published standards and proven design patterns.

The Model in many applications represents the internal state of the system as a set of one or more JavaBeans.

The View is most often constructed using JavaServer Pages (JSP) technology.

The Controller is focused on receiving requests from the client and producing the next phase of the user interface to an appropriate View component.

The primary component of the Controller in the framework is a servlet of class ActionServlet.

This servlet is configured by defining a set of ActionMappings.

30) What is ActionErrors?

An ActionErrors object encapsulates any validation errors that have been found.

If no errors are found, you return null or an ActionErrors object with no recorded error messages.

The default implementation attempts to forward to the HTTP version of this method.

It holds the request parameters, the mapping, and the request, and returns the set of validation errors if validation failed, or an empty set or null otherwise.

31) What is ActionForm?

An ActionForm is a JavaBean associated with one or more ActionMappings.

A JavaBean becomes a form bean when it extends the org.apache.struts.action.ActionForm class.

The ActionForm object is automatically populated on the server side with the data entered by the client in the UI.

An ActionForm can also maintain session state for the web application.

32) What is action mapping ?

The ActionMapping represents the information that the ActionServlet knows about the mapping of a particular request to an instance of a particular
Action class.

The mapping is passed to the execute() method of the Action class, enabling access to this information directly.
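
To make this concrete, here is a minimal sketch of a classic Struts 1 Action; the LoginAction class and the "success" forward are hypothetical names, and the mapping itself would be declared in struts-config.xml.

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

// Hypothetical Struts 1 Action: the ActionServlet selects it from an
// ActionMapping defined in struts-config.xml and calls execute().
public class LoginAction extends Action {

    @Override
    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request,
                                 HttpServletResponse response) throws Exception {
        // The mapping gives direct access to the configuration for this request.
        String user = request.getParameter("user");
        request.setAttribute("user", user);
        // "success" must match a forward configured for this mapping.
        return mapping.findForward("success");
    }
}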

33) What is MVC in Struts?

MVC stands for Model-View-Controller.

Model: The Model in many applications represents the internal state of the system as a set of one or more JavaBeans.

View : The View is most often constructed using JavaServer Pages (JSP) technology.

Controller : The Controller is focused on receiving requests from the client and producing the next phase of the user interface to an appropriate View component. 

The primary component of the Controller in the framework is a servlet of class ActionServlet. 

This servlet is configured by defining a set of ActionMappings.

34) What are different modules in spring?

There are seven core modules in spring: 

1- The Core container module
2- O/R mapping module (Object/Relational)
3- DAO module
4- Application context module
5- Aspect Oriented Programming
6- Web module
7- MVC module

35) What is Spring?

Spring is a lightweight open-source framework for the development of enterprise applications; it reduces the complexity of enterprise application development while providing a cohesive framework for J2EE application development.

It is primarily based on the IoC (inversion of control), or DI (dependency injection), design pattern.
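
As a quick, hypothetical illustration of IoC/DI with annotation-based configuration (the GreetingService and GreetingApp names are made up), the container, not the application code, wires the dependency:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;

@Component
class GreetingService {
    String greet(String name) {
        return "Hello, " + name;
    }
}

@Component
class GreetingApp {

    private final GreetingService service;

    @Autowired   // the container supplies the dependency; GreetingApp never creates it itself
    GreetingApp(GreetingService service) {
        this.service = service;
    }

    void run() {
        System.out.println(service.greet("Spring"));
    }
}

@Configuration
@ComponentScan
public class AppConfig {
    public static void main(String[] args) {
        AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(AppConfig.class);
        context.getBean(GreetingApp.class).run();  // beans are singletons by default
        context.close();
    }
}

By default, beans like these are singletons, which is the answer to question 38 below.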

36) What is the functionality of ActionServlet and RequestProcessor?

Receiving the HttpServletRequest

Populating a JavaBean from the request parameters

Displaying the response on the web page

Handling content-type issues

Providing extension points

37) Which part of the framework do the ActionServlet, RequestProcessor and Action classes belong to?

The Controller.

38) What is default scope in Spring?

Singleton.

39) What are the advantages of using Spring?

POJO-based programming enables component reuse.
It improves productivity and subsequently reduces development cost.
Dependency injection can be used to improve testability.
Spring provides enterprise services without the need for an expensive application server.
It reduces coupling in code and improves maintainability.

40) What are the benefits of the Spring Framework?

Lightweight container.

No need for application code to read configuration from properties files.

It is much easier to unit test, and objects are created lazily.

Spring’s configuration management services can be used in any architectural layer, in whatever runtime environment.

41) What is a servlet?

A servlet is a server-side component that provides a powerful mechanism for developing server-side programs.

Servlets are server- and platform-independent and can be designed for various protocols, the most commonly used being HTTP.

Servlets use classes from the javax.servlet and javax.servlet.http packages, such as HttpServletRequest, HttpServletResponse and HttpSession.

All servlets must implement the Servlet interface, which defines life-cycle methods.
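
A minimal sketch of an HTTP servlet (the class name and the /hello mapping are hypothetical); it overrides the life-cycle methods mentioned above and listed in question 43:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet mapped to /hello; the container drives its life cycle.
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {

    @Override
    public void init() throws ServletException {
        // Called once, after the container instantiates the servlet.
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Called (via service()) for every GET request.
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from a servlet");
    }

    @Override
    public void destroy() {
        // Called once, when the servlet is removed from service.
    }
}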

42) Is a servlet a pure Java object or not?

Yes, it is a pure Java object.

43) What are the phases of the servlet life cycle?

The life cycle of a servlet consists of the following phases:

  • Servlet class loading
  • Servlet instantiation
  • Initialization (the init method)
  • Request handling (the service method is called)
  • Removal from service (the destroy method is called)

44) What must be implemented by all servlets?

The Servlet interface must be implemented by all servlets.

Conclusion:

I hope that this article has given you great insight into Java/JEE interview questions and answers.

The responses given above will really enrich your knowledge and increase your understanding of Java/JEE programming.

Don’t forget that communication plays a big role in the recruiting process.

Make sure to use Grammarly’s AI-powered writing assistant for correcting your texts before sending them to your future employers.

Bonus:

By applying universal rules of software architecture, you can dramatically improve developer productivity throughout the life of any software system.

Now, building upon the success of his best-selling books Clean Code and The Clean Coder, legendary software craftsman Robert C. Martin (“Uncle Bob”) reveals those rules and helps you apply them.

Clean Architecture: A Craftsman’s Guide to Software Structure and Design (Robert C. Martin Series)

Some related articles you might be interested in:

1-Most Commonly Asked Java/JEE Interview Questions (Part-1)

2-OOP is Now The Basis of Computer Science

3- 6 Best Programmers of All Time

4-The Most Promising Fields for Programming in the Future

5-The 5 Most Used Languages for Web Development

6- The Best Way To Improve Your Programming Skill Level

7- Recommended Programming Language for Beginner To LEARN First

Connect with me on :Blog, Youtube, Facebook, Twitter


Most Commonly Asked Java/JEE Interview Questions (Part-1)

In this article I will talk about some of the most commonly asked Java/JEE interview questions.

1) What is J2EE?

J2EE stands for Java 2 Enterprise Edition.

The purpose of J2EE is developing multi-tier, web-based applications.

The J2EE platform consists of a set of services, application programming interfaces (APIs), and protocols.

2) What are the four components of a J2EE application?

Application client components.

Servlet and JSP technology are web components.

Business components (Enterprise JavaBeans).

Resource adapter components.

3) What are types of J2EE clients?

Applets

Application clients

Java Web Start-enabled clients, by Java Web Start technology.

Wireless clients, based on MIDP technology.

4) What are considered web components?

Java Servlet and JavaServer Pages technology components are web components.

A servlet is a Java software component that dynamically receives requests and builds responses.

JSP pages execute as servlets but allow a more natural approach to creating static content.

5) What is JSF?

JavaServer Faces (JSF) is a user interface (UI) framework for designing Java web applications.

JSF provides a set of reusable, standard UI components for web applications.

JSF is based on the MVC design pattern.

6) Define Hashtable

Hashtable is just like HashMap, but synchronized.

A Hashtable stores key/value pairs.

7) What is Hibernate?

Hibernate is an open-source object-relational mapping and query service.

In Hibernate we can write HQL instead of SQL, which saves developers from spending time writing native SQL.

Hibernate has powerful support for associations, inheritance, polymorphism, composition, and collections.

8) What are the limitations of Hibernate?

It is slower at executing queries than using queries directly.

Only query language support for composite keys.

No shared references to value types.

9) What are the advantages of Hibernate?

Hibernate is database-independent. It can be used to connect with any database, such as Oracle, MySQL, Sybase and DB2, to name a few.

Hibernate supports a powerful query language called HQL (Hibernate Query Language).

Hibernate’s transparent persistence ensures the automatic mapping between the application’s objects and the database tables.

10) What is ORM?

ORM stands for Object-Relational mapping. 

The objects of a Java class are mapped to the tables of a relational database using metadata that describes the mapping between the objects and the database.

It works by transforming the data from one representation to another.

11) What is the difference between save() and saveOrUpdate()?

a) save()

This method in Hibernate is used to store an object into the database.

It inserts a new record if one doesn’t exist; otherwise it does nothing.

b) saveOrUpdate()

This method in Hibernate is used for updating the object using its identifier.

If the identifier is missing, this method calls save().

If the identifier exists, it calls the update method.

Hibernate generates a lot of SQL statements at runtime based on our mapping, so it is a bit slower than plain JDBC.

12) What is the difference between the load() and get() methods?

The get() method returns null if the object can’t be found.

The load() method may return a proxy instead of a real persistent instance; get() never returns a proxy.
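
A small, hypothetical sketch that contrasts these calls on a classic Hibernate Session; a minimal User entity is included for illustration, and the SessionFactory configuration is assumed to exist:

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

@Entity
class User {
    @Id
    Long id;
    String name;
}

// Assumes an already-configured SessionFactory for the entity above.
public class UserDao {

    private final SessionFactory sessionFactory;

    public UserDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void saveVersusSaveOrUpdate(User freshUser, User detachedUser) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            session.save(freshUser);             // always INSERTs a new row
            session.saveOrUpdate(detachedUser);  // INSERTs if the identifier is missing, otherwise UPDATEs
            tx.commit();
        } finally {
            session.close();
        }
    }

    public User getVersusLoad(Long id) {
        Session session = sessionFactory.openSession();
        try {
            User found = (User) session.get(User.class, id);   // hits the database; returns null if absent
            User proxy = (User) session.load(User.class, id);  // may return an uninitialized proxy
            return found != null ? found : proxy;
        } finally {
            session.close();
        }
    }
}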

13) How do you invoke a stored procedure in Hibernate?
{ ? = call thisISTheProcedure() }

14) What are the benefits of ORM?

Productivity

Maintainability

Performance

Vendor independence

15) What are the core interfaces of the Hibernate framework?
Session Interface

SessionFactory Interface

Configuration Interface

Transaction Interface

Query and Criteria Interface

16) What is the file extension used for a Hibernate mapping file?

The file name should look like this: filename.hbm.xml

17) What is the file name of the Hibernate configuration file?

The file name should be: hibernate.cfg.xml

18) How is Hibernate or JPA database-independent?

Database independence means writing no code that depends on the database vendor.

Hibernate, or JPA in general, prevents you from writing code according to Oracle or MySQL specifications.

You use JPA classes and interfaces and let the JPA implementation (like Hibernate) do the rest.

19) Define connection pooling

Connection pooling is a mechanism for reusing connections.

A pool contains a number of already-created connection objects.

So, whenever a connection object is needed, this mechanism is used to get one directly without creating it.
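
For illustration, here is a small sketch using one popular pooling library, HikariCP; the JDBC URL and credentials are placeholders, and other pools such as Apache DBCP or c3p0 follow the same idea:

import java.sql.Connection;
import java.sql.SQLException;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolExample {
    public static void main(String[] args) throws SQLException {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/appdb"); // placeholder URL
        config.setUsername("app");                              // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(10);                          // pre-created, reusable connections

        HikariDataSource dataSource = new HikariDataSource(config);

        // Borrow a connection from the pool instead of creating a new one.
        try (Connection connection = dataSource.getConnection()) {
            System.out.println("Got connection: " + connection);
        } // close() returns the connection to the pool rather than closing the socket

        dataSource.close();
    }
}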

20) What is the hibernate proxy?
An object proxy is just a way to avoid retrieving an object until you need it. Hibernate 2 does not proxy objects by default.

21) What is HQL?

HQL stands for Hibernate Query Language.

Hibernate allows the user to express queries in its own portable SQL extension, and this is called HQL.

It also allows the user to express queries in native SQL.
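
A tiny, hypothetical HQL sketch; it assumes an open Session and a mapped User entity with a name property:

import java.util.List;
import org.hibernate.Session;

// Assumes an open Session and a mapped User entity with a "name" property.
public class UserQueries {

    @SuppressWarnings("unchecked")
    public List<User> findByName(Session session, String name) {
        // HQL refers to entities and their properties, not tables and columns.
        return session.createQuery("from User u where u.name = :name")
                      .setParameter("name", name)
                      .list();
    }
}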

22) What are the Collection types in Hibernate ?

Set, List, Array, Map, Bag

Conclusion:

So this brings us to the end of the first part of the Java interview questions.

This set of Java interview questions will definitely help you succeed in your job interview.

Don’t forget to check out the next part.

Good luck 🙂

Bonus:

It is great to be good technically, but you also have to be a great communicator to succeed.

Communication skills play a big role when writing documentation for frameworks and libraries, or when sending emails or Slack messages to coworkers.

They’re an important factor in how two or more people convey complex ideas and concepts to each other, which is core to collaborating as a software developer.

 And, more recently, communication skills have become an important part of software developer interviews, where most companies will check for a level of aptitude in a candidate’s communication skills.

So, it is very good to have a tool that will help you compose bold, clear, mistake-free writing.

I recommend Grammarly’s AI-powered writing assistant.

Some related articles you might be interested in:

1-5 Principles Will Make your Code Robust

2-OOP is Now The Basis of Computer Science

3- 6 Best Programmers of All Time

4-The Most Promising Fields for Programming in the Future

5-The 5 Most Used Languages for Web Development

Connect with me on :Blog, Youtube, Facebook, Twitter


5 Principles Will Make your Code Robust

The 5 principles I will talk about remain as relevant today as they were before.

According to Uncle Bob, software hasn’t changed all that much since 1945, when Turing wrote the first lines of code for an electronic computer.

Software is still if statements, while loops, and assignment statements: Sequence, Selection, and Iteration.

So let’s walk through the principles, one by one.

1- The Single Responsibility Principle(SRP)

Gather together the things that change for the same reasons. Separate things that change for different reasons.

We do not mix business rules with GUI code. 

We do not mix SQL queries with communications protocols.

We keep code that is changed for different reasons separate, so that changes to one part do not break other parts.

We make sure that modules that change for different reasons do not have dependencies that tangle them.
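
A tiny, hypothetical illustration: calculation and persistence change for different reasons, so they live in separate classes.

// Changes when the business rules for totals change.
class InvoiceCalculator {
    double total(double[] lineAmounts) {
        double sum = 0;
        for (double amount : lineAmounts) {
            sum += amount;
        }
        return sum;
    }
}

// Changes when the storage mechanism changes; kept separate from the calculation.
class InvoiceRepository {
    void save(String invoiceId, double total) {
        // persist to a database, a file, etc.
    }
}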

2- The Open-Closed Principle(OCP)

A Module should be open for extension but closed for modification.

It is about creating modules that can be extended without modifying them.

Can you imagine working in a system that did not have device independence, where writing to a disk file was fundamentally different from writing to a printer, a screen, or a pipe?

Do we want if statements scattered through our code to deal with all the little details?

Or do we want to separate abstract concepts from detailed concepts?

We want to keep business rules isolated from the nasty little details of the GUI, and the micro-service communications protocols, and the arbitrary behaviors of the database.
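
A small, hypothetical sketch: new shapes can be added by extension without touching the calculator.

// Open for extension: new shapes can be added without modifying AreaCalculator.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Rectangle implements Shape {
    private final double width;
    private final double height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    public double area() { return width * height; }
}

// Closed for modification: no if/else chain over shape types is needed here.
class AreaCalculator {
    double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape shape : shapes) {
            total += shape.area();
        }
        return total;
    }
}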

3- The Liskov Substitution Principle(LSP)

A program that uses an interface must not be confused by an implementation of that interface.

Many have made the mistake of thinking this is about inheritance.

It is not. It is about subtyping. All implementations of an interface are subtypes of that interface.

This principle is about keeping abstractions crisp and well-defined. It is impossible to believe that this is an outmoded concept.
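
A small, hypothetical sketch: code written against the interface must work with every implementation of it, without surprises.

// Callers of Clock must not be surprised by any implementation of it.
interface Clock {
    long nowMillis();   // contract: never negative, never goes backwards within a run
}

class SystemClock implements Clock {
    public long nowMillis() { return System.currentTimeMillis(); }
}

class FixedClock implements Clock {
    private final long fixed;
    FixedClock(long fixed) { this.fixed = fixed; }
    public long nowMillis() { return fixed; }
}

// This code is written against the interface and must work with every subtype.
class Stopwatch {
    private final Clock clock;
    private long start;
    Stopwatch(Clock clock) { this.clock = clock; }
    void start() { start = clock.nowMillis(); }
    long elapsed() { return clock.nowMillis() - start; }
}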

4- The Interface Segregation Principle(ISP)

Keep interfaces small so that users don’t end up depending on things they don’t need.

We still work with compiled languages. 

We still depend upon modification dates to determine which modules should be recompiled and redeployed.

So long as this is true we will have to face the problem that when module A depends on module B at compile time, but not at run time, then changes to module B will force recompilation and redeployment of module A.

This issue is especially acute in statically typed languages like Java, C#, C++ etc.
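
A small, hypothetical sketch: splitting a fat interface so that clients depend only on what they actually use.

// Fat interface: forces every client to depend on methods it may not need.
interface Machine {
    void print(String document);
    void scan(String document);
    void fax(String document);
}

// Segregated interfaces: clients depend only on what they actually use.
interface Printer {
    void print(String document);
}

interface Scanner {
    void scan(String document);
}

class SimplePrinter implements Printer {
    public void print(String document) {
        System.out.println("Printing " + document);
    }
}

// A report module that only prints never has to be recompiled because
// the scanning or faxing side of the system changed.
class ReportSender {
    private final Printer printer;
    ReportSender(Printer printer) { this.printer = printer; }
    void send(String report) { printer.print(report); }
}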

5- The Dependency Inversion Principle (DIP)

Depend in the direction of abstraction. High level modules should not depend upon low level details.

It is hard to imagine an architecture that does not make significant use of this principle. 

We do not want our high level business rules depending upon low level details. 

We want isolation of the high level abstractions from the low level details. 

That separation is achieved by carefully managing the dependencies within the system so that all source code dependencies, especially those that cross architectural boundaries, point towards high level abstractions, not low level details.
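
A small, hypothetical sketch: the high-level policy owns the abstraction, and the low-level detail depends on it.

// The high-level policy depends on an abstraction it owns...
interface PaymentGateway {
    void charge(String customerId, long amountCents);
}

class CheckoutService {
    private final PaymentGateway gateway;
    CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }

    void checkout(String customerId, long amountCents) {
        // The business rule lives here, independent of any particular provider.
        if (amountCents <= 0) {
            throw new IllegalArgumentException("nothing to charge");
        }
        gateway.charge(customerId, amountCents);
    }
}

// ...and the low-level detail points towards that abstraction by implementing it.
class ExternalProviderGateway implements PaymentGateway {
    public void charge(String customerId, long amountCents) {
        // call the external payment provider's API here
    }
}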

Conclusion:

Code that follows the S.O.L.I.D. principles can more easily be shared with collaborators, extended, modified, tested, and refactored without problems.

Bonus:

By applying universal rules of software architecture, you can dramatically improve developer productivity throughout the life of any software system. 

Now, building upon the success of his best-selling books Clean Code and The Clean Coder, legendary software craftsman Robert C. Martin (“Uncle Bob”) reveals those rules and helps you apply them.

Get your copy using the link below:

Clean Architecture: A Craftsman’s Guide to Software Structure and Design (Robert C. Martin Series)

Some related articles you might be interested in:

1-OOP is Now The Basis of Computer Science

2- 6 Best Programmers of All Time

3-The Most Promising Fields for Programming in the Future

4-The 5 Most Used Languages for Web Development

5- The Best Way To Improve Your Programming Skill Level

6- Recommended Programming Language for Beginner To LEARN First

Connect with me on :Blog, Youtube, Facebook, Twitter


OOP is Now The Basis of Computer Science

Object-oriented programming today is the basis of computer science.

In simple terms I would like to explain it with an example.

First of all, think about why we are all programming. The answer is to solve real-life problems and to save human effort and time.

So, imagine you are the worker responsible for adding cash to an ATM.

You need to apply some logic to decide how many notes of 500, 2000 or 100 to keep there, so that the demands of the people withdrawing money are fulfilled.

Here comes the object-oriented programming approach.

Imagine yourself as an object in the real world. You have some characteristics like height, weight, etc., and you take part in solving real-world problems.

In OOP, an object is represented by its data, its behavior, and the functions associated with it.

A simple and common example: an apple is an object of the class Fruit and has features like its red color and sweet taste.

The OOP concept includes two basic terms: class and object.

For example, the C language uses a procedural approach: the program is executed in the flow written by the programmer.

But using C++ or Java, you can divide the problem into classes and objects and solve it using functions and other features of object-oriented programming.

Now, to solve complex real-life problems, object-oriented programming provides the following features:


OOP

1- DATA ABSTRACTION

Showing only the essential details and hiding the rest, like when you turn on a switch: you just press the button, but you are unaware of the wiring and connections inside.

2- INHERITANCE 

When one class can inherit the features of a base (parent) class.

3- POLYMORPHISM

It is the ability of an object to take on many forms.

The most common use of polymorphism in OOP occurs when a parent class reference is used to refer to a child class object. 

4-DATA ENCAPSULATION

Wrapping up data and functions into one unit.

5- MODULARITY

We divide the program into small units to reduce the degree of complexity, and use these modules again and again according to the need of the programmer.
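
Here is a tiny, hypothetical Java sketch that ties these ideas together using the fruit example above:

// Encapsulation: data and behaviour wrapped in one unit, fields kept private.
abstract class Fruit {
    private final String color;   // hidden detail, exposed only through methods

    protected Fruit(String color) {
        this.color = color;
    }

    public String getColor() {
        return color;
    }

    // Abstraction: callers only know that every fruit has a taste.
    public abstract String taste();
}

// Inheritance: Apple reuses the features of its parent class Fruit.
class Apple extends Fruit {
    Apple() {
        super("red");
    }

    // Polymorphism: the parent-class reference below ends up calling this override.
    @Override
    public String taste() {
        return "sweet";
    }
}

public class FruitDemo {
    public static void main(String[] args) {
        Fruit fruit = new Apple();   // parent reference, child object
        System.out.println(fruit.getColor() + " and " + fruit.taste());
    }
}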

Conclusion:

A lot of developers criticize the object-oriented programming model for multiple reasons. 

The largest concern is that OOP overemphasizes the data component of software development and does not focus enough on computation or algorithms. 

Additionally, OOP code may be more complicated to write and take longer to compile.

Alternative methods to OOP include:

  • functional programming
  • structured programming
  • imperative programming

Most advanced programming languages give developers the option to combine these models.

Bonus:

Even bad code can function. But if code isn’t clean, it can bring a development organization to its knees.

Every year, countless hours and significant resources are lost because of poorly written code. But it doesn’t have to be that way.

Noted software expert Robert C. Martin presents a revolutionary paradigm with Clean Code: A Handbook of Agile Software Craftsmanship.

Martin has teamed up with his colleagues from Object Mentor to distill their best agile practice of cleaning code on the fly into a book that will instill within you the values of a software craftsman and make you a better programmer, but only if you work at it.

Get your copy using the link below:

Clean Code: A Handbook of Agile Software Craftsmanship

Some related articles you might be interested in:

1- 6 Best Programmers of All Time

2-The Most Promising Fields for Programming in the Future

3-The 5 Most Used Languages for Web Development

4- The Best Way To Improve Your Programming Skill Level

5- Recommended Programming Language for Beginner To LEARN First

6- Don’t Panic This is The Best way to Learn Programming

7- 4 Great YouTube Channels, that Will Improve Your Programming Skill

8-It is Never too Late to Learn How to Program

9-The Best Advice I Wish I know When I Start Programming

Connect with me on :Youtube, Facebook, Twitter


6 Best Programmers of All Time

In this article I am going to talk about top 6 programmers in the world of all time.

1. Dennis Ritchie

Dennis Ritchie

Dennis Ritchie was an American computer scientist who helped shape the digital era. 

He created the C programming language and with long-time colleague Ken Thompson, the Unix operating system. 

Ritchie and Thompson received the Turing Award from the ACM in 1983, the Hamming Medal from the IEEE in 1990 and the National Medal of Technology from President Clinton in 1999.

Ritchie was the head of Lucent Technologies System Software Research Department when he retired in 2007.

2. Bjarne Stroustrup

Bjarne Stroustrup

Bjarne Stroustrup is a Danish computer scientist, most notable for the creation and development of the widely used C++ programming language. 

He is a Distinguished Research Professor and holds the College of Engineering Chair in Computer Science at Texas A&M University, a visiting professor at Columbia University, and works at Morgan Stanley.

3. James Gosling

James Gosling

James Arthur Gosling is a Canadian computer scientist, best known as the father of the Java programming language. 

Due to his extraordinary achievements, Gosling was elected a foreign associate member of the United States National Academy of Engineering.

4. Linus Torvalds

Linus Torvalds

Linus Benedict Torvalds is a Finnish American software engineer, who was the principal force behind the development of the Linux kernel.

He later became the chief architect of the Linux kernel, and now acts as the project’s coordinator.

He also created the revision control system Git as well as the diving log software Subsurface.

He was honored, along with Shinya Yamanaka, with the 2012 Millennium Technology Prize by Technology Academy Finland, in recognition of his creation of a new open-source operating system for computers, leading to the widely used Linux kernel.

5. Anders Hejlsberg

Anders Hejlsberg

Anders Hejlsberg is a prominent Danish software engineer who co-designed several popular and commercially successful programming languages and development tools. 

He is the creator of the popular programming language C#.

He was the original author of Turbo Pascal and the chief architect of Delphi. 

He currently works for Microsoft as the lead architect of C# and core developer on TypeScript.

6. Donald Knuth

Donald Knuth

Donald Ervin Knuth is an American computer scientist, mathematician, and Professor Emeritus at Stanford University. 

He is the author of the multi-volume work The Art of Computer Programming. 

Knuth has been called the father of the analysis of algorithms. 

He contributed to the development of the rigorous analysis of the computational complexity of algorithms and systematized formal mathematical techniques for it. 

In the process he also popularized the asymptotic notation. 

Knuth is the creator of the TeX computer typesetting system, the related METAFONT font definition language and rendering system and the Computer Modern family of typefaces.

Conclusion:

It’s very difficult to name just 6 though. 

There are many important contributors to the world of computer science whose names are not widely known. 

But, these names in my opinion are the top.

Bonus:

Even bad code can function. But if code isn’t clean, it can bring a development organization to its knees.

Every year, countless hours and significant resources are lost because of poorly written code. But it doesn’t have to be that way.

Noted software expert Robert C. Martin presents a revolutionary paradigm with Clean Code: A Handbook of Agile Software Craftsmanship.

Martin has teamed up with his colleagues from Object Mentor to distill their best agile practice of cleaning code on the fly into a book that will instill within you the values of a software craftsman and make you a better programmer, but only if you work at it.

Get your copy using the link below:

Clean Code: A Handbook of Agile Software Craftsmanship

Some related articles you might be interested in:

1-The Most Promising Fields for Programming in the Future

2-The 5 Most Used Languages for Web Development

3- The Best Way To Improve Your Programming Skill Level

4- Recommended Programming Language for Beginner To LEARN First

5- Don’t Panic This is The Best way to Learn Programming

6- 4 Great YouTube Channels, that Will Improve Your Programming Skill

7-It is Never too Late to Learn How to Program

8-The Best Advice I Wish I know When I Start Programming

Connect with me on :Blog, Youtube, Facebook, Twitter


The Most Promising Fields for Programming in the Future

Technology trends come and go. 

A decade ago, the buzzwords were AI and Machine Learning.

Half a decade ago, Cloud was all the rage.

Now we’re talking about Big Data and Kafka.

Who knows what the buzzwords will be in another 5–10 years.

The beginning of the 21st Century was not the first time that AI was a buzzword. 

The previous time occurred in the era of John McCarthy, one of the smartest and most innovative computer scientists ever to exist.

Anyhow, back to the point: AI was a big thing in the 1960s.

Now, I’m going to ask you: what actually changed about AI between the time they were using Lisp back in the McCarthy era, albeit on much less powerful machines, and the 2000s, when AI saw a resurgence?

From the 70s through the 90s, there was the AI Winter wherein people were disillusioned about the power of computers because they didn’t go and take over the world in the 1970s.

Now, people are slowly losing faith in that little AI buzzword because computers didn’t take over the world in the 2000s.

Now, they’re just excited about having more powerful computers, being able to look at bigger and bigger data sets, and using queues to connect different processes together, kind of like a factory.


So, what’s the lesson?

Don’t worry too much about the fields of the future.

The buzzwords will change with the times. The science will plod along as science generally tends to do. 

The engineering will be applied science as it always has been.

Conclusion:

You might ask this question:

What can I do ?

Become competent!

Learn how to think and program properly. 

Understand how to get a system to do what you want it to do. 

If you can do this, you’ll be fine no matter what field you wind up in at any given point.

Bonus:

Even bad code can function. But if code isn’t clean, it can bring a development organization to its knees.

Every year, countless hours and significant resources are lost because of poorly written code. But it doesn’t have to be that way.

Noted software expert Robert C. Martin presents a revolutionary paradigm with Clean Code: A Handbook of Agile Software Craftsmanship.

Martin has teamed up with his colleagues from Object Mentor to distill their best agile practice of cleaning code on the fly into a book that will instill within you the values of a software craftsman and make you a better programmer, but only if you work at it.

Get your copy using the link below:

Clean Code: A Handbook of Agile Software Craftsmanship

Some related articles you might be interested in:

1-The 5 Most Used Languages for Web Development

2- The Best Way To Improve Your Programming Skill Level

3- Recommended Programming Language for Beginner To LEARN First

4- Don’t Panic This is The Best way to Learn Programming

5- 4 Great YouTube Channels, that Will Improve Your Programming Skill

6-It is Never too Late to Learn How to Program

7-The Best Advice I Wish I know When I Start Programming

Connect with me on :Blog, Youtube, Facebook, Twitter


The 5 Most Used Languages for Web Development

To give you an idea of the options available, the 5 most commonly used languages for web development at the moment, in no particular order, are:

  • PHP
  • Ruby
  • Python
  • Java
  • JavaScript

You might see some people mention ASP.NET; this isn’t a language, it’s a framework.

I’ll talk about the difference later, but for now let’s just say that while ASP.NET is a valid choice as a framework, as a beginner I wouldn’t worry about it.

This is going to be a little bit more than you asked for, probably, but in the end I think it’ll help you out more with your decisions. If you already know parts of the following, feel free to skip over.

Overview of how a web application works

You, the user, enter a url into the address bar in your browser.

Using the information in that url, your browser identifies what you want and where to look for it and sends a request to the application’s server.

This request is sent using a protocol called HTTP. A protocol just means that your browser and the server follow a set of rules when they talk to each other. HTTP is a particular set of rules.

When the server receives the request your browser sent, it looks at it and decides what to send back to your browser.

When you first visit a web application, this is almost always an HTML document.

So, the server returns a response to your browser, which contains HTML.

HTML contains directions for your browser on what to display to the user. It also tells your browser the other things it will need to display the application correctly to the user.

Primarily, CSS and JavaScript documents, but also images, fonts and many other things.

The browser then makes more requests to get these other things, let’s call them assets.

These requests for assets can be made to any number of different servers, and they take time!

A good web application does its best to make this process efficient and undetectable to the user.

A link in a webpage is like a shortcut to entering the url in your address bar, and most of the time when you enter in a url or click a link, this whole process is initiated again.
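
As an illustration of this request/response cycle, here is a minimal Java sketch using the standard java.net.http client available since Java 11; the URL is just a placeholder:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchPage {
    public static void main(String[] args) throws Exception {
        HttpClient browserLikeClient = HttpClient.newHttpClient();

        // The "url in the address bar": where to send the HTTP request.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/"))   // placeholder URL
                .GET()
                .build();

        // The server's response: a status code, headers, and (usually) an HTML body.
        HttpResponse<String> response =
                browserLikeClient.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("Status: " + response.statusCode());
        System.out.println("Content-Type: "
                + response.headers().firstValue("content-type").orElse("unknown"));
        System.out.println(response.body().substring(0, Math.min(200, response.body().length())));
    }
}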

Back-end development

Back-end development is all about the server, or servers, as a web application may be spread, either completely or in parts, across many different servers.

It’s about how the application’s servers decide to respond to a request.

It’s about how quickly they can respond, and how many requests they can respond to at the same time.

As a side note, for beginners to web development, front-end development is almost always more important.

Even if you’re not concerned with making your application pretty.

Even for the first project you mentioned. I highly recommend you start by focusing on HTML, CSS and JavaScript.

What programming languages should I learn for web development?

Before I answer this question, I will give you a definition of a framework.

A framework is the application that runs your application.

It gives you a set of tools that make building a web application easier.

The exact definition of a framework depends on who you talk to, but they’re there to make your life easier.

Learn the language, then learn the framework you picked. It’s easy to use Google to find guides and tutorials for any of the languages or frameworks.

I don’t recommend either Java or Javascript for back-end beginners.

Java is a trickier language than the other four, and while it's tempting to say that node.js (JavaScript for the server) reuses the skills you developed while learning front-end development, in practice I think node.js is less beginner-friendly than the other three options (PHP, Ruby and Python).

But that’s my opinion, if you’d like to check it out anyway, the “Hello World” example for Express, the most popular node.js back-end framework.

Conclusion:

I personally think Python is the best choice for beginners, and it has a lot of community support.

But by the time you’re far enough along as a web developer to form opinions for yourself, you should know more than one language.

Bonus:

Even bad code can function. But if code isn’t clean, it can bring a development organization to its knees.

Every year, countless hours and significant resources are lost because of poorly written code. But it doesn’t have to be that way.

Noted software expert Robert C. Martin presents a revolutionary paradigm with Clean Code: A Handbook of Agile Software Craftsmanship.

Martin has teamed up with his colleagues from Object Mentor to distill their best agile practice of cleaning code on the fly into a book that will instill within you the values of a software craftsman and make you a better programmer, but only if you work at it.

Get your copy using the link below:

Clean Code: A Handbook of Agile Software Craftsmanship

Some related articles you might be interested in:

1- The Best Way To Improve Your Programming Skill Level

2- Recommended Programming Language for Beginner To LEARN First

3- Don’t Panic This is The Best way to Learn Programming

4- 4 Great YouTube Channels, that Will Improve Your Programming Skill

5- It is Never too Late to Learn How to Program

6- The Best Advice I Wish I know When I Start Programming

Connect with me on: Blog, Youtube, Facebook, Twitter

Featured

The Best Way To Improve Your Programming Skill Level

I will provide you with some practical solutions if you are struggling to write code.

First of all, coding is like a game and anybody can learn it if they have an appetite to create something productive. 

Being a good programmer, you can actually contribute a lot to make things simpler and better.

I think there could be two possible reasons for that :

1- You do not have a basic understanding 

For example, you will not be able to build a website unless you have a good knowledge of basic UI technologies like HTML, JavaScript, jQuery, etc.

2- You understand coding but lack practice

Coding is all about practice: the more you practice, the better you get at it.

As a solution, I suggest starting from the basics, giving yourself some time to absorb new concepts, and practicing each concept until you are confident with it.

As a Java and JavaScript developer, I would suggest you follow:

The Javadocs, the Head First Java book and Clean Code: A Handbook of Agile Software Craftsmanship for the Java programming language, and the Mozilla docs for JavaScript.

For example, if you try to learn Java from multiple sources at the same time, you will end up spending more time on each concept.

Spend enough time covering as much of the basics as you can, because that's what will speed up your progress once you start learning advanced technologies related to Java.

Here is a list of some technologies and languages that you can adopt based on your learning needs:

  • If you want to develop a simple static website: learn HTML5, CSS and JavaScript, in that order.
  • Want to build a dynamic, responsive website: learn Angular and Bootstrap, for example.
  • Want to build a simple application: learn core Java, .NET or Python.
  • Want to build an enterprise application: learn advanced Java (JSP, Servlets, Struts, the Spring Framework, web services).

Conclusion:

Think big, start small.

Start practicing code daily and try to get better at it as soon as possible.

Good Luck 🙂

Bonus:

Even bad code can function. But if code isn’t clean, it can bring a development organization to its knees. 

Every year, countless hours and significant resources are lost because of poorly written code. But it doesn’t have to be that way.

Noted software expert Robert C. Martin presents a revolutionary paradigm with Clean Code: A Handbook of Agile Software Craftsmanship

Martin has teamed up with his colleagues from Object Mentor to distill their best agile practice of cleaning code on the fly into a book that will instill within you the values of a software craftsman and make you a better programmer, but only if you work at it.

Get your copy using the link below:

Clean Code: A Handbook of Agile Software Craftsmanship

Some related articles you might be interested in:

1- Recommended Programming Language for Beginner To LEARN First

2- Don’t Panic This is The Best way to Learn Programming

3- 4 Great YouTube Channels, that Will Improve Your Programming Skill

4-It is Never too Late to Learn How to Program

5-The Best Advice I Wish I know When I Start Programming

6–4 PRINCIPALES To Market Yourself As A PROFESSIONAL Developer

Featured

Recommended Programming Language for Beginner To LEARN First

Whether you’re looking to begin coding as a hobby, a new career, or just to enhance your current role, the first thing you’ll have to do is decide which programming language you want to start with.

There is no right answer, of course. 

Choosing a first language will depend on what kind of projects you want to work on, who you want to work for, or how easy you want it to be. 

Hopefully, this guide will help give you a better idea of which one you should pursue.

Python

Python is always recommended if you’re looking for an easy and even fun programming language to learn first. 

Rather than having to jump into strict syntax rules, Python reads like English and is simple to understand for someone who’s new to programming.

 This allows you to obtain a basic knowledge of coding practices without having to obsess over smaller details that are often important in other languages.

Python is also ideal for web development, graphical user interfaces (GUIs), and software development.

In fact, it was used to build Instagram, YouTube and Spotify, so it's clearly in demand among employers, in addition to being quick to pick up.

Though it has its advantages, Python is often thought of as a slow language that requires more testing and is not as practical as other languages for developing mobile apps.

JavaScript

JavaScript is another incredibly popular language. 

Many websites that you use every day rely on JavaScript, including Twitter, Gmail, Spotify, Facebook, and Instagram, according to General Assembly. Additionally, it's a must-have for adding interactivity to websites because it works together with HTML and CSS.

This makes it essential for front-end development and consumer-facing websites while becoming increasingly important in back-end development and growing in demand all the time. 

There’s nothing to install with JavaScript since it’s already built into browsers, so it’s the easiest language to get started with in terms of set-up. 

The con here is that JavaScript is interpreted differently across browsers, so you'll need to do some extra cross-browser testing, and it can fall short of server-side scripts when it comes to responsive design.

Again, while it’s not the most difficult to learn, it certainly isn’t as easy as Python.

Ruby

Ruby is similar to Python in that it’s one of the easiest languages for people with no prior programming experience to read. 

You don’t need to know a ton of commands or programming vocabulary to learn it, and it has a multitude of libraries and tools that come in handy.

A big reason people like Ruby is the awesome full-stack framework Ruby on Rails, which is becoming increasingly popular among startups and enterprise solutions.

Airbnb, Groupon, Hulu, and SoundCloud are just a few of the websites that were built with Ruby on Rails, and Ruby has quite an active developer community today.

The reason it’s so popular for small businesses, however, is often one of the many criticisms against it.

Ruby can struggle with scalability across a large system and may have a hard time with performance on larger websites.

Conclusion:

While Ruby is certainly easy to learn, you’ll find most of the opportunities come from learning Ruby on Rails, which may slow down your learning curve if you were just expecting to take the easy way out to create a website.

Bonus:

I am going to recommend a great book for people who have decided to teach themselves how to code.

The content is clearly and concisely presented in steps that gradually build on each other in a way that allows you to follow along smoothly.

Click on the link below to get your copy :

The Self-Taught Programmer: The Definitive Guide to Programming Professionally

Some related articles you might be interested in:

1- Don’t Panic This is The Best way to Learn Programming

2- 4 Great YouTube Channels, that Will Improve Your Programming Skill

3-It is Never too Late to Learn How to Program

4-The Best Advice I Wish I know When I Start Programming

5–4 PRINCIPALES To Market Yourself As A PROFESSIONAL Developer

Connect with me on: Blog, Youtube, Facebook, Twitter

Featured

Don’t Panic This is The Best way to Learn Programming

Learning programming is fun and exciting but most people who start fail along the way and never fully realize their potential. 

Before I tell you how to get started, here are 3 reasons why most wannabe programmers fail.

1. You think it’s too easy.

Most would-be programmers, when they think about learning to code, imagine that it's all about picking up some tutorials, holing up for a few hours over the weekend, and coming out a professional computer programmer.

Far from it. You'll need to dedicate time consistently to learning in order to have a breakthrough.

2. You lack real focus

Most beginners get started learning with no particular target or goal in mind.

That is, they have no particular project in mind or problem that they’d like to solve.

The excitement of merely learning something new only carries you through the first few hours.

 So, you need a more solid drive or motivation to keep you going, a problem you’d like to solve with this skill you are learning.

3. Too high expectations.

Ok, there is a high demand for software developers.

 In fact web developers are even in higher demand. 

But wait, this doesn’t mean companies are just going to hire you for writing a script that prints “Hello World!” to the screen.

Getting hired actually takes more time and effort spent polishing up your skills and experience, not to mention that your pay will be quite low at the start.

You might even have to work for free just to get that experience that someone will be willing to pay for.

So how, then, do you get started learning to code?

Here is a 3 step guide that will make your life easier than you imagined.

I. Identify your field.

There are countless programming languages, and you couldn't possibly learn all of them.

Besides, there is not a one-size-fits-all programming language.

 Do you want to venture into Mobile App development, Web development, Game development or Desktop App development? 

Each has its own best choice of tools, so picking a field will help you settle on a programming language.

II. Learn the basics

Once you settle on your favorite field and a programming language, get started learning the basics. 

III. Build a project

Once you have got the basics right, quickly move on to building something real.

Something that can be used by someone else. Not a Todo App.

It could be a library, framework, piece of software, plugin or package.

This will force you to put your skills together and exercise your problem-solving skills.

Conclusion:

It is these actual projects that you build that will get you hired. It is what will count as experience for you.

This way you can get your software career launched as fast as possible.

Bonus:

I am going to recommend a great book for people who have decided to teach themselves how to code.

The content is clearly and concisely presented in steps that gradually build on each other in a way that allows you to follow along smoothly.

Click on the link below to get your copy :

The Self-Taught Programmer: The Definitive Guide to Programming Professionally

Some related articles you might be interested in:

1- 4 Great YouTube Channels, that Will Improve Your Programming Skill

2-It is Never too Late to Learn How to Program

3-The Best Advice I Wish I know When I Start Programming

4–4 PRINCIPALES To Market Yourself As A PROFESSIONAL Developer

5-The Easy and Best Way To Learn Programming

Connect with me on: Blog, Youtube, Facebook, Twitter

Featured

4 Great YouTube Channels, that Will Improve Your Programming Skill

Programming is a field that demands that you keep up to date and learn new things on a daily basis.

So the problem is finding great sources of information to improve your skills.

I found a lot of people asking about YouTube channels, that could help them.

In this article, I will share with you 4 YouTube channels that have helped me a lot on my programming journey.

1- Programming With Mosh

This is a great YouTube channel to learn how to design and program.

Mosh Hamedani is a software engineer and an expert on web development.

I love his way of teaching how to structure, manage and understand languages, frameworks, patterns, and more.

2-Udacity

Udacity, a pioneer in online education, is building a “University by Silicon Valley”, a new type of online university that teaches the actual programming skills that industry employers need today.

3-LevelUpTuts

This is a great resource for tutorials on PHP and WordPress.

4-freeCodeCamp.org

There are tons of 2-minute whiteboard explanations of various software engineering tools and concepts on Free Code Camp’s YouTube channel.

Conclusion:

Watching videos can trick you into feeling like you’re making progress with your programming skills, but it’s a supplement, not a substitute, for actually spending time programming.

Bonus:

I am going to recommend a great book for people who have decided to teach themselves how to code.

The content is clearly and concisely presented in steps that gradually build on each other in a way that allows you to follow along smoothly.

Click on the link below to get your copy :

The Self-Taught Programmer: The Definitive Guide to Programming Professionally

Some related articles you might be interested in:

1-It is Never too Late to Learn How to Program

2-The Best Advice I Wish I know When I Start Programming

3–4 PRINCIPALES To Market Yourself As A PROFESSIONAL Developer

4-The Easy and Best Way To Learn Programming

5-Tricks to Learning Java Quickly

6- The Best Way to Learn JavaScript,and Become A Professional (video)

7-The Best and Low Cost Web Hosting To Use

8- Angular Start to Slowly Dying (video)

9- 4 Practical Books for Software Architecture (video)

10- Professional Illustrate the Specifications before Jumping to Code

11- The Design Cannot Be Taught

12- Class Diagram is The Most Popular and Complex

13- How To Be a Great Problem-Solver Software Engineer

14-The Key to Becoming a Professional Software Engineer

Connect with me on: Blog, Youtube, Facebook, Twitter

Featured

It is Never too Late to Learn How to Program 

I have seen a lot of older people who want to start learning how to program.

But they get discouraged by their age.

40 is a GREAT age to learn programming!

There are multiple reasons for this. I'm assuming you're asking for practical ones.

Can I learn programming now and still get a job?

Yes, you can!

So I'll start with the practical reasons, but then also cover some other benefits of learning programming that you may not have considered.

1- Industry demand for programmers is still really high

The notion that companies only want programmers under 35 really only applies heavily in the startup world (and then, only sometimes), in Silicon Valley, and perhaps NYC, but to a lesser degree.

Otherwise, the demand for programmers is so much higher than the supply that, just in the past 5 years, I've seen companies hire people with little or no programming background to fill the gap; “mathematicians will do” is often what companies resort to.

2- 40 still is not as old as you might be thinking

Yes, neuroplasticity can tend to begin declining in your mid-thirties, but you don’t lose your ability to learn new things overnight. 

All you need is a great dose of curiosity and to enjoy tinkering. 

Step 1 to being a programmer is to sit down at your computer and start messing around with it. 

Break it, then figure out how to fix it, rinse and repeat. 

Learn how to do things from Powershell (if on Windows) or the Bash Terminal (if on Linux/Mac) and see how much you can do without relying on fancy UI’s.

Interacting with the system through the terminal is kind of a first step toward programming, because you're already controlling the system through written commands.

3- Neuroplasticity

Research suggests that learning new things helps to preserve your neuroplasticity as you get older.

Playing musical instruments and computer programming are often cited as things you could learn that accomplish this goal particularly well.

I think programming does pretty well in this regard because it encourages continuous learning: there are always new programming languages coming out, existing languages tend to evolve over time, and the same is true of the tools and frameworks you'll use.

So, it’s actually good to pick-up programming for your mental health, even if it’s just for a hobby.

4- Getting a Better Appreciation

You'll gain a better appreciation for, and understanding of, electronic devices such as smartphones, tablets and PCs.

That will make you more aware of how much work goes into those devices and what the vulnerabilities and risks are that come with using each device.

You will gain a lot of knowledge simply from learning to program something like a website using a framework such as Spring Boot or Angular.

Conclusion:

As I said, it's never too late to start your programming journey; you just need the passion and patience to do it.

Good luck 🙂

Bonus:

I am going to recommend a great book for people who have decided to teach themselves how to code.

The content is clearly and concisely presented in steps that gradually build on each other in a way that allows you to follow along smoothly.

Click on the link below to get your copy :

The Self-Taught Programmer: The Definitive Guide to Programming Professionally

Some related articles you might be interested in:

1-The Best Advice I Wish I know When I Start Programming

2–4 PRINCIPALES To Market Yourself As A PROFESSIONAL Developer

3-The Easy and Best Way To Learn Programming

4-Tricks to Learning Java Quickly

5- The Best Way to Learn JavaScript,and Become A Professional (video)

6-The Best and Low Cost Web Hosting To Use

7- Angular Start to Slowly Dying (video)

8- 4 Practical Books for Software Architecture (video)

9- Professional Illustrate the Specifications before Jumping to Code

10- The Design Cannot Be Taught

11- Class Diagram is The Most Popular and Complex

12- How To Be a Great Problem-Solver Software Engineer

13-The Key to Becoming a Professional Software Engineer

Connect with me on: Blog, Youtube, Facebook, Twitter

Featured

The Best Advice I Wish I know When I Start Programming

In this article I am going to share some advice that will make you a stronger programmer.

1- Practice makes perfect

I can't elaborate more than this.

2- Don't be discouraged if you see code and apps that look pretty complicated.

Believe me, those who made apps like Instagram, WhatsApp or Facebook practiced for a decade, maybe decades, to build them.

You need time. The difference between a master and a beginner is time.

3-You need a good team 

A good team will inspire you and push you beyond your limits.

4- Write clear and readable codes 

It will help you read your own code years later and let other programmers understand what you did.

5- Syntax should be written with utmost concentration

Because a little syntax mistake, like forgetting a semicolon or writing fuction() instead of function(), can cause a lot of problems.

6- Write meaningful and simple names

 It’s all about your variables, functions or whatever plus master reading others code. 

7-Coding is not complex

There is no such thing as one big problem in coding; there is always a set of small problems that forms the big one.

8-Reading Books

Read a lot of books; they have always been among the best resources for learning anything.

9-You are not bored, you are just not motivated 

Don’t think you are bored while coding. the bitter truth is that you are not bored you are just not motivated.

10-Don’t always believe in Youtubers

Don’t always believe in Youtubers, saying top 10 programming language to learn in 2020 when you see the list, cobol comes first.

11-Don’t worry about the time

Don’t worry about the time to learn coding. 

It usually takes a decade to master a language. Just make sure you are not going astray.

Every good thing needs time; don't rush to get things done overnight.

Conclusion:

As Elon Musk said:

You don’t need college to learn stuff, the value is seeing whether somebody can work hard at something

So, you have to have a plan to improve your coding skills every day, and don’t stop learning.

After years of experience you will find yourself an expert on the domain.

Bonus:

I am going to recommend a great book for people who have decided to teach themselves how to code.

The content is clearly and concisely presented in steps that gradually build on each other in a way that allows you to follow along smoothly.

Click on the link below to get your copy :

The Self-Taught Programmer: The Definitive Guide to Programming Professionally

Some related articles you might be interested in:

1- Digital Marketing For Busy Developer

2–4 PRINCIPALES To Market Yourself As A PROFESSIONAL Developer

3-The Easy and Best Way To Learn Programming

4-Tricks to Learning Java Quickly

5- The Best Way to Learn JavaScript,and Become A Professional (video)

Connect with me on: Blog, Youtube, Facebook, Twitter

Featured

Digital Marketing For Busy Developer

Digital marketing is a way to promote brands and products online and through other digital channels. 

Most developers, especially freelancers, have a specific audience they are trying to reach, and digital marketing aims to help them reach these target consumers through the internet and other digital avenues.

Website Marketing

In many ways, your personal website is the cornerstone of your digital marketing strategy.

This is where many of your target customers first get an impression of your brand, and more often than not, this is where your leads will eventually convert into paying customers.

So let’s talk more about how your website plays a role in how digital marketing works.

The goal of digital marketing is to attract, engage, and convert your leads. 

Many of the tactics that you will use to do this will ultimately lead your target customers back to your website to get more information or make a purchase.

Your website is sometimes your brand’s only chance to make a good first impression with consumers in your target market. 

For this reason, you should pay attention to the layout of your site as well as the colors and graphics that you use in your site design. 

In fact, according to Adobe, 38% of people will stop engaging with a website if they find the content or layout to be unattractive.

Search Engine Optimization

Search engine optimization also plays a big role in how digital marketing works. 

If you want to reach and convert consumers in the digital age, you’ll need to start with the search engines. 

A recent research study by Forrester found that 71% of consumers start their buyer’s journey on search engines like Google. 

If you are not taking the right steps to improve your site’s SEO then you may be missing out on a powerful opportunity to reach a significant amount of leads.

Search engine optimization is the process of optimizing your site’s content so that it appeals to the search engines. 

The end goal is to rank higher on the search engine results page (SERP) to increase visibility in your target market.

The higher you rank on the SERP, the more organic traffic you can drive back to your website.

Search engine optimization not only brings more traffic to your website, but it also helps ensure that the leads you are bringing in are of a higher quality.

The goal of digital marketing is to attract those who are right for your products or services, and SEO plays an important role in doing just that. 

By emphasizing certain keywords and topics within your content, you can work to reach those online who are most likely to be interested in your products or services.

Content Marketing

Content marketing is another important tactic that plays a significant role in how digital marketing works. 

Content marketing is essentially when you create and promote content assets aimed at attracting and engaging your target customers.

 These content assets can be created for a number of different purposes, including generating brand awareness, growing site traffic, boosting leads, or retaining customers.

No matter which tactics that you use as part of your digital marketing strategy, you will need to create content to support these tactics.

 This can be something as short and simple as a thank you email to someone who has subscribed to your email list. 

Or it can be a longer, more detailed piece like an e-book, that describes and provides information about one of the biggest challenges that your target customers face.

Here are just a few types of content marketing that you might create to support your digital marketing campaign goals:

  • Website pages
  • Blog posts
  • Social media posts
  • E-books
  • White papers
  • Case Studies
  • Testimonials
  • Videos
  • Images
  • Infographics
  • Podcasts
  • Ad Content

The key to creating great content assets that help support your digital marketing campaigns is strategically choosing topics that appeal most to your audience. 

If you haven’t already, make sure that you do some target audience research and even create customer personas to ensure that you know your customers well and can identify what types of content will attract and engage them at each step in the buyer’s journey.

Social Media Marketing

Most professional developers today are using social media marketing to support their digital marketing campaigns and drive more traffic to their website. 

Social media marketing involves promoting your content and engaging with your target consumers on social media channels like Facebook, Instagram, LinkedIn, and Pinterest. 

This tactic is used in digital marketing to help developers increase brand awareness, generate more leads, and improve customer engagement.

One of the biggest appeals of social media marketing is that it allows developers to reach a wider audience online. 

For example, 79% of American internet users are active on Facebook. 

If you are not trying to reach and engage these consumers on the social platform, then you are certainly missing out on an important opportunity to reach new leads.

Social media not only works as its own tactic, but it can also support all of your other digital marketing efforts. 

For instance, if your brand develops an informative eBook that speaks to your target audience’s pain points, you can use social media to promote the eBook and drive traffic to the landing page for the download. 

You can then re-purpose pieces of the eBook for future social media posts as a way to generate further interest for the content piece.

Email Marketing

Email marketing is yet another piece of the puzzle that is how digital marketing works. 

You, as a developer, can use branded emails to communicate with your target audience.

Marketing emails are often used as a way to increase brand awareness, promote events, and get the word out about special promotions.

The content of your marketing emails will ultimately depend on your campaign goals. 

Here are just a few examples of the types of email marketing content you might develop to support your digital marketing campaigns:

  • Send a welcome email when new users subscribe to your marketing email list letting them know what they can expect to see from your brand emails.
  • Deliver promotional content about upcoming sales and discounts straight to the consumer’s inbox.
  • Develop a newsletter that goes out to subscribers periodically to deliver the latest content and company updates from your business.
  • Email leads after they have downloaded content from your site to thank them for their interest and even recommend additional relevant content pieces.
  • Suggest additional products or content assets that your leads and customers may be interested in, based on their browsing and buying behavior.

Conclusion

It’s important to note that email marketing is mainly used not for generating new leads, but rather nurturing leads once they have shown interest. Marketing emails can also be used as part of your customer retention campaigns.

In fact, according to eMarketer, 80% of retail professionals report that email marketing is one of the best tactics for driving customer retention.

Bonus:

A great source of actionable insights for anyone working with tech communities on an everyday basis.

A must-read book for all people interested in DevRel, but also packed with ideas for marketing and product professionals.

Compelling and useful, which is a rare combination.

Click on the link below to get your copy :

The Business Value of Developer Relations: How and Why Technical Communities Are Key To Your Success 1st ed. Edition

Some related articles you might be interested in:

1–4 PRINCIPALES To Market Yourself As A PROFESSIONAL Developer

2-The Easy and Best Way To Learn Programming

3-Tricks to Learning Java Quickly

4- The Best Way to Learn JavaScript,and Become A Professional (video)

5-The Best and Low Cost Web Hosting To Use

6- Angular Start to Slowly Dying (video)

7- 4 Practical Books for Software Architecture (video)

8- Professional Illustrate the Specifications before Jumping to Code

9- The Design Cannot Be Taught

10- Class Diagram is The Most Popular and Complex

11- How To Be a Great Problem-Solver Software Engineer

12-The Key to Becoming a Professional Software Engineer

Connect with me on: Blog, Youtube, Facebook, Twitter

Featured

4 PRINCIPALES To Market Yourself As A PROFESSIONAL Developer

Whenever I hear of great marketing or anything to do with marketing I usually think of one thing or rather one company: Coca-Cola.

Coca-Cola went from a cocaine-infused elixir in 1886 to a ubiquitous sugary drink by 1929.

Now people in more than 200 countries drink 1.9 billion servings every day, according to The Coca-Cola Company.

Coca-Cola used seven key design and marketing strategies, which made it as recognizable in the streets of Shanghai as in its hometown of Atlanta by the 1920s, says Coca-Cola VP of innovation and entrepreneurship David Butler.

So, how do you apply Coca-Cola's marketing principles to grow your software development career?

Simplicity

Despite having grown into a massive global industry with innumerable products, Coca-Cola has never strayed from its timeless and basic ideals. 

Throughout the decades and multitudes of marketing campaigns, Coca-Cola has remained consistent when communicating one strong and effective message: pleasure. 

Enduring, simple slogans such as “Enjoy” and “Happiness” never go out of style and translate easily across the globe.

How to apply it: as a developer, focus on a few technologies and try to become an expert in them.

You have to define your own mission and try to respect it.

So, after years of experience and contributing to many projects, you will become a reference in the domain.

Personalization

Despite its status as a global icon, Coca-Cola understands that it has to find a way to speak to consumers at a more personal, localized level.

Initially introduced in Australia, the company’s Share a Coke campaign has now successfully expanded to over 50 countries.

Each country’s offerings are customized to its local culture and language, with the most popular names of each region printed on cans and bottles in place of the company’s moniker.

 This campaign is the perfect example of effectively applying a localized positioning strategy to a global market.

How to apply it :

You have to treat each customer as unique, with a custom project.

Try to understand their needs and expectations in depth.

Socialization

Social media is one of the fastest-growing tools for effective international marketing, giving companies the ability to reach consumers on a worldwide level through a single platform. 

Besides being an effective localization strategy, the Share a Coke campaign also successfully utilizes social networks to engage consumers and prompt them to share their Coke experience with others. 

According to the Wall Street Journal, there were over 125,000 posts about the campaign in just one month after it launched in the United States.

How to apply it :

You have to be more present on social media: Facebook, Twitter, Instagram, and so on.

Share updates, techniques, news and other content related to your domain of expertise with the community, to grow your network.

Experience

A significant part of Coca-Cola’s success is its emphasis on brand over product. 

Coke doesn’t sell a drink in a bottle, it sells “happiness” in a bottle. 

With thousands of different products and packaging designs that vary among regions, a global marketing plan focused on the products themselves would be challenging to manage. 

Instead, Coke aims to sell consumers the experience and lifestyle associated with its brand. 

For example, Coke recently unveiled a new packaging campaign where it individualized 2 million bottle designs.

AdWeek writer Tim Nudd writes :

 The resulting product conveys to ‘Diet Coke lovers that they are extraordinary, by creating unique one-of-a-kind extraordinary bottles,’ 

said Alon Zamir, vp of marketing for Coca-Cola Israel.

Though the products may vary, the experiences they are selling, happiness and friendship, are universally shared and understood.

How to apply it :

Create your own brand, try to develop it, and dominate a category in your domain.

People buy from someone they know and trust.

Conclusion:

Being highly competent is good, but being competent and well known is excellent.

Bonus:
A great source of actionable insights for anyone working with tech communities on an everyday basis. 

A must-read book for all people interested in DevRel, but also packed with ideas for marketing and product professionals.

 Compelling and useful, which is a rare combination.

Click on the link below to get your copy :

The Business Value of Developer Relations: How and Why Technical Communities Are Key To Your Success 1st ed. Edition

Some related articles you might be interested in:

1-The Easy and Best Way To Learn Programming

2-Tricks to Learning Java Quickly

3- The Best Way to Learn JavaScript,and Become A Professional (video)

4-The Best and Low Cost Web Hosting To Use

5- Angular Start to Slowly Dying (video)

6- 4 Practical Books for Software Architecture (video)

7- Professional Illustrate the Specifications before Jumping to Code

8- The Design Cannot Be Taught

9- Class Diagram is The Most Popular and Complex

10- How To Be a Great Problem-Solver Software Engineer

11-The Key to Becoming a Professional Software Engineer

Connect with me on: Blog, Youtube, Facebook, Twitter

Featured

The Easy and Best Way To Learn Programming

A lot of people want to know how to learn programming from scratch.

Every day I read comments of such kind. 

For example, someone is working in a different area and wants to become a software tester. Or they work in IT but want to jump to a new level and start programming.

All these people have one thing in common. They want to start the programming journey.

But this area is so broad that they don't know how to start. And everyone tells them something different about how to do it.

So, in this article you will learn how to start programming from scratch.

You will also receive some advice that will help you along the way.

So, let's begin. To learn how to code you need to follow these steps:

1. Look around

Currently programming is a very broad area. 

So it’s good to look around and see in what directions you can go.

Because if you know possible ways, you can choose the right one for you.

It also helps you choose the technologies you should learn.

For starters, consider website programmers.

These are people who use, for example, WordPress or another content management system (CMS), and with the help of their skills they adjust a website so that it works the way the client expects it to.

To do this, such people mostly need to know JavaScript, HTML and CSS.

Website developer is probably the most popular image of a programmer in the world.

But programming is not only about WordPress websites. 

It is much, much more. Programming is divided into several realms.

First, I will give you three examples of frontend realms.

Frontend programming is, in short, programming the part that the user/client sees: the interface.

Webapp programming: building online business apps (such as your banking app or a movie comparison app). These are specific applications that fill the needs of a specific group of users.

They don't serve to manage content (like WordPress) but to manage processes.

In addition to JavaScript, HTML and CSS, programmers need to know some additional technologies for webapp programming. It depends on the project, for example: SCSS, TypeScript and any of the following frameworks: Ember, Angular, Vue or React. A framework is a skeleton of an application with built-in features.

Desktop app programming: like the above, but about programming applications you can install, for example, on a Windows system. This group differs from the previous one and uses other technologies; for example, programming for Windows can use the C# language, the .NET Framework, Java frameworks, etc.

Mobile app programming: writing apps installed on mobile devices, mainly those running the Android and iOS operating systems. Programming for Android mostly uses the Java programming language, and for iOS, Swift. Each system has its own frameworks, and there are also frameworks that target both.

These were examples of frontend realms. Now let's move on to backend realms. Backend is, more or less, programming the things the client/user does not see but that are essential for the system to work:

Database programming: databases like MySQL and MSSQL.

A database is, more or less, an advanced spreadsheet. It stores much more data, though, and lets you manipulate it programmatically: add data, remove it, change it.

Additionally, it allows you to set up different ways of handling data.

For example, reject incomplete data, or gather data from different tables. Database systems use their own variations of the SQL language (for example T-SQL for MSSQL), which lets you code various operations on the data. There are also databases that don't use SQL; they go by the somewhat mystical name NoSQL.
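As a rough illustration (my own sketch, not part of the original text), this is what talking to a database from Java with JDBC can look like; the database name, table and credentials are made up, and you would need the MySQL JDBC driver on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CustomerQuery {
    public static void main(String[] args) throws SQLException {
        // Hypothetical local database "shop" with a "customers" table.
        String url = "jdbc:mysql://localhost:3306/shop";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT name, email FROM customers WHERE country = ?")) {
            stmt.setString(1, "US"); // the database rejects or filters data according to the rules you set up
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name") + " <" + rs.getString("email") + ">");
                }
            }
        }
    }
}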

Backend programming: a backend developer processes data between the database and the frontend, or between different systems that need to cooperate.

Almost every programming language can be used for backend purposes.

But since the backend has specific use cases, people tend to use different languages than for the frontend.

For example: PHP, Ruby, Python, Java, but also JavaScript.

Most programmers split into frontend and backend developers.

But there are also fullstack developers. A fullstack developer is a person who knows both frontend and backend technologies and can take care of everything needed to set up an application: database, backend and user interface.

But these are not all the realms, and in most overviews like this the others are omitted.

We have more and more advanced devices and sensors that provide data to information systems.

These need to be programmed too. This is called embedded programming. Embedded programming is done mostly in C and C++, because these languages give the programmer access to low-level abstractions, which saves the limited resources of these small devices.

Another thriving realm of programming is data analysis. Data analysis developers work with large data sets in companies and prepare results so that business conclusions can be drawn from them.

Such programmers use a different set of languages, including Python and R.

Another realm is legacy programming. Legacy developers know technologies that were used to build systems but are not popular anymore. They support legacy systems that still need to function but were built years ago.

There are lots of other programming realms. I won't write about all of them here, but you can find them.

This is the end of this point. Why is it so important? If you know what area you are interested in, you can choose the technologies you should learn. And it is easier to learn a few technologies than 200 of them.

Let's assume you already know what area you are interested in, shall we? What's next?

2. Choose technologies

To move further you need to make an informed decision about technologies you should learn.

But honestly, it is hard to get good advice about it.

You could ask someone, or go to university or a bootcamp.

But either way, you will end up learning:

a) what is popular

b) what is known by the person who teaches you

c) what is used by the person who answers you. And often that is not a good choice. But I have good news: you can do better.

Read about what technologies are used by companies that work in area, you are interested in. 

Search for job offers and see what technologies recur.

Don't lose enthusiasm because 20 technologies are listed there.

Much of it is fiction; people list things that are not actually required later for the job.

Often, I read that people know a little bit of this and a little bit of that, and they still don't know how to become a programmer.

They are lost. It's understandable: programming is a very broad area, and you cannot learn everything. You need to choose a direction and stick to it. That way you increase your chance of success.

3. Choose learning method

This point is very important, and I will explain why.

Everyone has their own opinion on how a developer should learn.

If you look for the answer online, you will find a lot of ideas about what is right.

One person will advise YouTube videos. Another will tell you books are the way to go. Someone else will say a bootcamp is best.

“A software developer learns from the documentation!” Some developers also say that a college education is useless. Such statements make me sad.

You don't know what is best, and that's completely natural: everyone tells you different things. Personally, I could recommend ebooks and video tutorials, because I like them, but that would not be a proper or helpful answer.

In reality, it does not matter how you learn. As long as you learn. 

Do you remember how you learned at school?

What did it look like?

Did a teacher ask you to learn a little bit of this and a little bit of that, things with no connection between them?

No. And in school you learned complicated stuff, stuff that didn't matter to you, yet everyone remembered a little bit of it.

That's because learning in school is systematic and methodical, step after step. The system was developed through hundreds of years of experience. If you think it is not good enough, ask yourself: could you still make a potato battery?

This system is that good!

You are interested in learning programming. The best thing you can do for yourself is to choose a learning method that is methodical and systematic.

So let's ask ourselves: what is the most methodical and systematic way of learning?

Arguably it is formal education. Teaching staff know methodical and systematic education very well. It will be hard, but you will learn programming.

Another way to learn is books. With a little bit of care you will find a book that teaches things step by step, from easy to difficult, from A to Z. But you need to like reading.

Video tutorials and online courses can also be great: if the author prepared them methodically, they will help you learn, especially if you like to watch and learn.

A bootcamp is an intensive programming training. You can benefit from this too; it is a good solution if you like working with people in a group.

The only thing I want to warn you about is not to use non-methodical, non-systematic and incomplete ways of learning programming.

Imagine you learned 50% of a topic from a YouTube video course (because it's free), but there is no second half. You search for another course.

But the other course covers things you don't know, repeats some things you do know, and explains the topic in a totally different way. It is so dispiriting!

So for starters: choose content that covers a topic from A to Z.

Before closing this point, I also want to mention one issue that is extremely important.

I read that people advise beginners to use English content to learn.

These people don't know what they are doing. When you are a beginner, don't make your life harder.

If you have content in your native language, use it.

Why should you learn programming in a foreign language at the same time?

4. Set a goal

Setting goals is often omitted when planning a career.

People start one course, then stop, then go to a bootcamp, while the time inevitably passes.

After several months you don’t remember what you have learned before.

It seems you learned something, but what precisely did you accomplish?

To make learning easier it is a good idea to set a goal. 

For example, you can commit to reading an 800-page book in a month, to finishing your degree, or to finding a junior developer job in 6 months.

Everything goes better if you set a goal.

It is something about our nature: goals make life easier. Learning is not easy. Programming is not easy. Learning programming is doubly hard.

That is why a goal is so important. And satisfaction from reaching it gives motivation for further work.

5. Learn systematically

Once you have a goal, the next step is to make a learning schedule.

One hour every day. But every day, always one hour. And not just to watch another video tutorial or read an article, but to learn something meaningful.

Is there a way to make it easier?

Focus is required to learn programming. A lot of focus. Try this out: announce to everyone, “At 6pm I will be learning programming for an hour; please don't disturb me.”

Switch off your phone and log out of Facebook. Leave the children with the in-laws and the dog with a neighbour.

Observe how much you will learn! You need to cut yourself off from the world to really grasp programming.

6. Code yourself

An example of a coding plan: every day, set a goal to code something.

One day: simple calculator.

Next day: simple page with movie covers. 

And so on.
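For instance, the "simple calculator" from day one could be as small as this sketch (my own illustration, written in Java and requiring a recent JDK for the switch expression):

import java.util.Scanner;

public class SimpleCalculator {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("First number: ");
        double a = in.nextDouble();
        System.out.print("Operator (+ - * /): ");
        String op = in.next();
        System.out.print("Second number: ");
        double b = in.nextDouble();

        // Pick the operation based on the operator the user typed.
        double result = switch (op) {
            case "+" -> a + b;
            case "-" -> a - b;
            case "*" -> a * b;
            case "/" -> a / b;
            default -> throw new IllegalArgumentException("Unknown operator: " + op);
        };
        System.out.println("Result: " + result);
    }
}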

Obviously, it is easier when you are studying, working, attending a bootcamp, or following a book with example tasks, because you don't need to figure out the tasks on your own.

This is also a good method, but sometimes such tasks are boring and don't force you out of your comfort zone.

And what if you come up with a task yourself? Then you will hit obstacles that pinpoint exactly what you should learn to go further with your practice.

7. Ask for help

Since we are talking about problems: in your early programming days it is hard to articulate what exactly you are having problems with.

For example, when a bug occurs in an application.

How to translate the problem into words?

How to ask for it? 

Sometimes it is so hard that people don't know what phrase to put into a search engine. Years ago it was easier.

There was Stack Overflow and there were support groups where you were able to ask any question and count on help from others.

Today you can face answers like “search the internet”, “this was asked before” or “asked again”.

This is frustrating. Unfortunately, the internet is already filled with answers.

What advanced software developers don't comprehend is that it is hard to name a problem when you are a beginner programmer.

Even if they had exactly the same problem! I can assure you of this!

Thus, I strongly urge you to ask questions. Find a friendly place on the internet for software developers, or find a mentor, and ask, ask, ask. Sometimes you will run into hostility, but don't let it bring you down; if it happens, find another place that is nicer.

The more you ask, the easier it will become to articulate what you are struggling with.

As time passes, your questions will get better and better, and answers will show up faster and faster. Later, you will discover that some problems can be solved just by the act of formulating a proper question.

8. Find work fast

I often read that people postpone searching for a job until they learn just a bit more, and half a year passes by. Software development is a field where you need to renew your knowledge all the time.

Let's say you have learned the 3–4 technologies that you need.

You start to search for a job and run into trouble: you could be searching for several months! And while doing so, you will forget half of the things you have learned.

So my advice is to start searching for a job roughly halfway through your learning path.

There are several good reasons. First of all, if you take an intern or junior position, your employer assumes you know almost nothing.

Secondly, the recruiting process itself takes time. Time will pass before you fine-tune your CV and before you learn the pre-made test questions available online.

More time will pass while you learn to get through recruitment interviews without jitters. Finding a job is also something you need to learn.

So, halfway through your learning path, start looking around for a job. Send CVs, schedule meetings, get used to it. Then your technical readiness will meet your recruiting readiness and there won't be any lag.

9. Ask for more complicated tasks

Now you have a job. But the biggest trap of programming is still ahead of you: since you have a job, you might put the books back on the shelf and just do whatever you are asked to do.

Don't do that! Be aware that interns and juniors are often given very easy tasks rather than complicated ones. You will soon notice that these take less and less time to complete and start to become boring. If you don't do anything about it, you lose time you could spend learning new things instead of running around in circles.

On top of that, your employer sees when an intern or junior is standing still. I am an employer; it is as clear as day. And no one wants an intern or junior who isn't progressing.

So, when you see that you are getting better, ask your supervisor for more complicated tasks. He or she should know to give you something more complex.

This is important in order to keep learning and developing yourself while at work.

As an intern or junior you still need to learn a lot, a lot more. I am 100% sure you don't want to become one of those people on the internet who say that an intern or junior position gave them nothing and taught them nothing.

They didn't learn because they didn't want to go further, and they wasted their own and their employer's time.

10. Master new technologies

Programming is an awesome field, but it changes all the time. 

What you have learned today will become outdated in one year, and obsolete in five.

When you find a job and feel comfortable in it, master something new. Expand your skills and don't let your professional development stand still. If you quit your studies, maybe it's worth going back. Attend a bootcamp, read a book, take an online course: whatever allows you to stay up to date with technology.

Conclusion:

That is all you need to know to start programming from scratch.

This was a really long article, and I am really amazed you have reached the end of it.

I am sure 99% of people didn't. It looks like you care about becoming a software developer, and caring is 99% of success.

I wish you all the luck. Programming is awesome and gives a lot of professional satisfaction. Don't lose heart. Go on, step by step.

Bonus:

I am going to recommend a tool to help you listen to video tutorials or courses with very good sound.

This is Creative Pebble 2.0 USB-Powered Desktop Speakers with Far-Field Drivers and Passive Radiators for PCs and Laptops (White)

Inspired by the zen Japanese rock garden, the orb-shaped Creative Pebble is a sleek and elegant 2.0 speaker system that looks perfect in any home and office.
It features a 45° elevated sound stage for enhanced audio projection and is powered by a single USB cable.

It will help you better understand video courses and boost your career.

Some related articles you might be interested in:

1-Tricks to Learning Java Quickly

2- The Best Way to Learn JavaScript,and Become A Professional (video)

3-The Best and Low Cost Web Hosting To Use

4- Angular Start to Slowly Dying (video)

5- 4 Practical Books for Software Architecture (video)

6- Professional Illustrate the Specifications before Jumping to Code

7- The Design Cannot Be Taught

8- Class Diagram is The Most Popular and Complex

9- How To Be a Great Problem-Solver Software Engineer

10-The Key to Becoming a Professional Software Engineer

Connect with me on: Blog, Youtube, Facebook, Twitter

Featured

Tricks to Learning Java Quickly

In this article I will mention some important tricks that will help you grow as a Java developer and gain more knowledge about the language.

1- Get the basics right

As Java offers so many features and options to the developers, people are sometimes lured into learning too many things in too little time. 

As a result, they get bits-and-pieces knowledge of a few options that Java offers, but their basics hang by a thread.

Java is a programming language that is easy if you have paid attention to the simple basics; however, it can be frustrating if you get greedy and try to take the shorter route forward.

2. Don’t just read

Well, if your sole purpose of learning Java is to clear the exam you have the next day, go ahead and mug up all the things that you can and you might just get the passing marks. 

However, if you are really serious about learning Java and getting better at it, the best way to do it is not by reading, but by implementing.

Gain knowledge and then execute what you have learnt, in the form of code. 

You can never learn Java if you are not willing to get your hands dirty.

3. Understand your code and algorithm

Even if you are writing simple code with an if-else statement, as a beginner, start by working the code out on a piece of paper.

The algorithm and the whole compiler process will look so meaningful once you understand the idea behind the code.

Even for experts, the best way to solve a complex problem, or to formulate an algorithm for a Java program, is to break the problem into sub-parts and then devise a solution for each sub-part.

When you start getting the solutions right, you will gain the confidence to do more.
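As a small illustration of breaking a problem into sub-parts (my own example, not the author's), here is the classic leap-year question split into the three simple rules that make up the answer:

public class LeapYear {
    // One tiny sub-problem: is the year divisible by n?
    static boolean divisibleBy(int year, int n) {
        return year % n == 0;
    }

    // The big problem, expressed as three small if-else decisions.
    static boolean isLeapYear(int year) {
        if (divisibleBy(year, 400)) return true;   // rule 1: every 400th year is a leap year
        if (divisibleBy(year, 100)) return false;  // rule 2: other century years are not
        return divisibleBy(year, 4);               // rule 3: otherwise, every 4th year is
    }

    public static void main(String[] args) {
        System.out.println(isLeapYear(2024)); // true
        System.out.println(isLeapYear(1900)); // false
    }
}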

4. Do not forget to allocate memory

This trick is particularly useful for those who switch from C, C++ to Java. 

Allocating memory in Java with the new keyword is a necessity, as objects and arrays are created dynamically at runtime.

C and C++ handle this differently, so you must take care when handling array and object declarations in Java.

Declaring an array or object without allocating it with new leaves it null, and using it will throw a NullPointerException.
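A minimal sketch of what this warning means (my own example): the array field below is declared but never allocated with new, so using it before allocation would fail at runtime.

public class AllocationDemo {
    static String[] names;          // defaults to null: no array has been allocated yet

    public static void main(String[] args) {
        // names[0] = "Ada";        // would throw NullPointerException: names is still null
        names = new String[3];      // allocate the array with new
        names[0] = "Ada";           // now this works
        System.out.println(names[0]);
    }
}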

5. Avoid creating useless objects

When you create an object in Java, you consume memory and processor time.

Since object creation always allocates memory, it is better to keep the object requirements in check and not create unwanted objects in the code, as the sketch below shows.
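
Here is a small illustrative sketch of that idea (the variable names are mine): concatenating strings in a loop creates a new String object on every iteration, while reusing a single StringBuilder does not.

public class UselessObjectsExample {
    public static void main(String[] args) {
        // Wasteful: each iteration builds a brand new String object
        String wasteful = "";
        for (int i = 0; i < 1000; i++) {
            wasteful = wasteful + i;
        }

        // Better: one StringBuilder object is reused for the whole loop
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            builder.append(i);
        }

        System.out.println(wasteful.length() == builder.length()); // true
    }
}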

6. Interface is better than Abstract class

There is no multiple inheritance in Java, and this will be spoon-fed to you so many times while learning the language that you will probably never forget it for the rest of your life.

However, the trick here is not to remember that there is no multiple inheritance in Java, but the fact that interfaces come in handy if you want to implement something like multiple inheritance without using the extends keyword.

Remember, in Java, when nothing goes your way, you will always have interfaces by your side.

An abstract class does not always give programmers the liberty of having a variety of methods to work with; an interface, on the other hand, only has abstract methods.

Therefore, it does the job of abstract classes and has other advantages as well.
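
A quick sketch of that idea (the interface and class names are invented for illustration): a class cannot extend two classes, but it can implement several interfaces at once, which gives an effect similar to multiple inheritance.

interface Drivable {
    void drive();
}

interface Floatable {
    void floatOnWater();
}

// One class picking up behaviour from two interfaces
class AmphibiousCar implements Drivable, Floatable {
    public void drive()        { System.out.println("driving on the road"); }
    public void floatOnWater() { System.out.println("floating on the lake"); }
}

public class InterfaceExample {
    public static void main(String[] args) {
        AmphibiousCar car = new AmphibiousCar();
        car.drive();
        car.floatOnWater();
    }
}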

7. The standard library is a blessing

The biggest advantage that Java has over its predecessors, from a programming point of view, is probably its rich set of standard library methods. 

Using Java’s standard library makes a programmer’s job easier and more efficient, and gives the code a well-organised flow.

Further, operations can be performed easily on the methods specified in the library.

8. Prefer primitive types over wrapper classes

Wrapper classes are no doubt of great utility, but they are often slower than primitive types.

A primitive type holds only a value, while a wrapper class stores a full object around that value.

Further, since wrapper instances are objects, comparing them the way you compare primitives does not give the desired result: it ends up comparing object references instead of the values stored in them.

Example:

int number_1 = 10;
int number_2 = 10;
Integer wrapperNum_1 = new Integer(10);
Integer wrapperNum_2 = new Integer(10);
System.out.println(number_1 == number_2);
System.out.println(wrapperNum_1 == wrapperNum_2);

In the above example, the second print statement will not display true, because the wrapper objects themselves are compared, not the values stored in them.
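
For completeness, a small hedged follow-up: to compare the values held by wrappers, use equals() (or compare the unboxed int values) instead of ==.

public class WrapperComparisonExample {
    public static void main(String[] args) {
        Integer wrapperNum_1 = Integer.valueOf(10);
        Integer wrapperNum_2 = Integer.valueOf(10);

        System.out.println(wrapperNum_1.equals(wrapperNum_2));                  // true: compares values
        System.out.println(wrapperNum_1.intValue() == wrapperNum_2.intValue()); // true: compares unboxed ints
    }
}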

9. Dealing with strings

Since String is a class in Java, a simple concatenation of two strings results in the creation of a new String object,

which eventually affects the memory use and speed of the system.

It is always better to instantiate a string object directly from a literal, without using the constructor for this purpose.

Example:

String slowInit = new String("The slow instantiation"); // slow: goes through the constructor
String fastInit = "This string is faster";              // fast: uses a string literal

10. Coding, Coding, Coding

There is so much to learn about Java that you never really get to the end of it, and it keeps getting more interesting and amusing; however, it is important to maintain the interest to learn and the hunger to get better.

Conclusion:

The Java programming language can be learnt on your own with great success; the only thing required is continuous learning, and coding to test what you have learnt.

Java is a lot like playing a sport: the more you sweat in practice, the less you bleed in the match.

Bonus:

I am going to recommend a tool that helps you listen to video tutorials or courses with very good sound.

This is :

Creative Pebble 2.0 USB-Powered Desktop Speakers with Far-Field Drivers and Passive Radiators for PCs and Laptops (White)

Inspired by the zen Japanese rock garden, the orb-shaped Creative Pebble is a sleek and elegant 2.0 speaker system that looks perfect in any home and office. 


It features a 45° elevated sound stage for enhanced audio projection and is powered by a single USB cable.

It will help you get a better understanding of the video courses and boost your career.

Some related articles you might be interested in:

1- The Best Way to Learn JavaScript, and Become A Professional (video)

2- The Best and Low Cost Web Hosting To Use

3- Angular Start to Slowly Dying (video)

4- 4 Practical Books for Software Architecture (video)

5- Professional Illustrate the Specifications before Jumping to Code

6- The Design Cannot Be Taught

7- Class Diagram is The Most Popular and Complex

8- How To Be a Great Problem-Solver Software Engineer

9- The Key to Becoming a Professional Software Engineer

Connect with me on: Blog, YouTube, Facebook, Twitter

Featured

The Best Way to Learn JavaScript, and Become A Professional

This is how I learned JavaScript (JS) back in the day, and I hope it will help you as well.

1- JavaScript Basics

This is the most fundamental and most important part, and the first step on the path to learning JS.

Start with how to add JS to HTML and the difference between async and defer, and then move on to variables, data types, loops and conditionals, functions, anonymous functions, closures, arrays and associative arrays, events, regular expressions, and promises, in that order.

I may have missed a few topics here, but you will stumble on them along the way anyway.

Also, learn how to debug with Chrome DevTools, because Chrome DevTools is the best in the business.

2- Object-Oriented Programming 

After you have a strong foundation in the JavaScript basics, you should move on to OOP concepts.

I would say OOP is the most important concept in JS or any other programming language.

OOP in JS is based on the prototype inheritance chain, unlike class-based inheritance in Java or C++.

Perhaps move on to metaprogramming after OOP; it is not the most important part of JS, but it is nice to know, because believe me, you do not want JavaScript to surprise you.

3- Testing and QA

Testing your code is as important as debugging.

You might have heard of words like TDD or BDD.

TDD (test-driven development) is a style of programming where you write automated failing tests before you write your actual code.

I know it sounds weird, but believe me, you will prefer TDD over BDD (behaviour-driven development).

BDD takes the opposite approach to TDD.

When you work on big projects, testing is really important for your code to work the way you expect it to.

It gives you that satisfaction and sense of security.

One of my favourite tools for testing JS is Jasmine. It is a really simple and effective way to test your code.

4- jQuery

jQuery is a JavaScript library that makes everything more dynamic and interactive, and I really mean everything.

It is really fun to use and play around with, and it takes only a few lines of code to get things done in jQuery.

jQuery interacts with the DOM and CSS.

While learning jQuery itself, you will encounter how to integrate AJAX with it.

Have you ever seen something like this in the browser, where you get the content you requested, without refreshing the web page? 

Yes that’s AJAX.

AJAX stands for Asynchronous JavaScript and XML, and it handles requests asynchronously so that your HTML and CSS can be updated without a full page reload.

Now that you have learned the JS basics, jQuery, Chrome DevTools, testing, QA, and AJAX, you could call yourself a proper JS developer.

Now it is time to move on to frameworks, which get the job done and are not only the most in-demand skills in 2020 but also among the best paid.

Here are frameworks you should really, really learn in my opinion.

5- React

Formerly known as React.js, React, along with Angular, is one of the most in-demand front-end technologies of 2020.

React was initially developed by Facebook together with a few individuals and a small community.

But over time, React has gone through lots of changes, and those changes are now paying off.

React lets you develop UIs as small chunks of a web page called components.

React is really easy to learn, and yet really strong and fun to code in.

I would highly recommend learning React.

If you decide to be a React developer, you should also consider learning the wider React ecosystem, such as server-side rendering with React and React Native.

6- Angular 

Angular is pretty much like React in a few ways, but Angular lets you build SPAs (single-page applications) in the form of components.

It is one of the most in-demand skills on the front-end side of the IT industry.

Yes, you read that correctly.

Angular is a JS framework, which basically means it is built on JS.

Since JS can be found everywhere, in every browser, that makes it really powerful and pretty useful as well.

I would highly recommend learning both React and Angular.

If you want to continue your journey with JS, there are other frameworks and libraries like Ember, Backbone, Knockout.js, Vue.js, Chart.js, and so on.

It is really cool that JS also has a library for data visualization, called D3.js.

If you want to learn any of them, just go ahead and learn it.

You might or might not find it shocking and fascinating that JS has its own way to run on the server side, just like PHP or ASP.NET.

NodeJS 

NodeJS is a runtime for writing JS on the server side.

Believe it or not, code and APIs written in NodeJS are ridiculously fast and can handle many requests at the same time.

Quite a few companies have already moved many of their servers to NodeJS.

It is the future of server-side development, but I would not call it the present, because there are still a few issues with the scalability and deployment of big web apps implemented in Node.

ExpressJS

ExpressJS is a server-side framework that runs on NodeJS and is also written in JS.

If you are learning or want to learn NodeJS, be sure to master ExpressJS as well, because it takes care of much of the boilerplate of working with NodeJS directly.

It’s a powerful framework as well.

This is the last section of the JavaScript learning path, but it is not the least.

There will be a time when you want to write less code, and more legible code.

There is one small tool, I would not quite call it a library, that is really, really fun to learn and use.

CoffeeScript

CoffeeScript is equivalent to JS, but without the clutter and hassle of semicolons, brackets, or even curly braces.

I would really recommend learning it.

It is really easy to read and even easier to write.

Conclusion

This all might sound like a lot, but don’t worry, it really is not.

Once you get comfortable with JS, there is no limit to what you can learn on your way to becoming an expert in JS.

As a programmer, you are going to keep learning throughout your life, so stay inspired by what you do.

Some related articles you might be interested in:

1- The Best and Low Cost Web Hosting To Use

2- Angular Start to Slowly Dying (video)

3- 4 Practical Books for Software Architecture (video)

4- Professional Illustrate the Specifications before Jumping to Code

5- The Design Cannot Be Taught

6- Class Diagram is The Most Popular and Complex

7- How To Be a Great Problem-Solver Software Engineer

8- The Key to Becoming a Professional Software Engineer

As a bonus, I recommend the book Pro Angular 9: Build Powerful and Dynamic Web Apps to master Angular.

Connect with me on: Blog, YouTube, Facebook, Twitter

Featured

The Best and Low Cost Web Hosting To Use

I will explain precisely how to choose a web host and point out which key indicators you need to be mindful of.

I will also recommend one web host to go with.

Here are eight important points that you, as a private person or company, should consider when choosing a web host.

1- Space

A website that is primarily text-based does not require much space, but one that contains rich media often requires quite a lot of it.

Find out what it will cost if you need to scale in the future.

Sometimes it can be profitable to pay for more space than your site initially demands.

2- Traffic

You’ll be consuming bandwidth if your website contains a decent number of pictures and videos. 

Choose a web host that allows a large amount of traffic.

Otherwise, find out how much it costs to scale on demand.

3- Speed

Web hosting response time and transaction speed are essential.

Response time is the time it takes for the visitor to reach the server, and transaction speed is the time it takes to download the page in the browser.

Internet users rarely like slow web pages.

 The website risks losing many visitors unless the web host has a good response time and a fast transaction speed.

4- Reliability

Operational reliability is essential, even more so for an e-commerce site.

Reliability can be measured as uptime, which is the percentage of time that the service is in operation over a given period.

No web host has 100% uptime.

Sometimes the server has to be restarted.

It can suffer from overload, power failure, and more.

The lower the uptime, the higher the loss of revenue, because the website is not available during downtime.

5- Support

Web hosting support is critical.

Ideally, the web host should have both e-mail and phone support. 

E-mail contact is usually sufficient, but phone support must be available and efficient if the server suddenly goes down. 

One tip is to test the support before deciding on the web host, such as asking questions via e-mail and phone.

6- Technology

Ensure that the web host you choose supports the software you are working with, such as WordPress, Joomla, and MySQL.

7- E-mail

Choose a web host where you have access to many e-mail addresses.

Having unique e-mail addresses gives a professional impression even for small businesses.

8- Price

Price may be important, but keep in mind that cheap is not always good.

When choosing a web host, you should observe the above points and then weigh the price.

When comparing prices, find out if they include or exclude taxes.

So I will recommend one web host that is low-priced and respects the criteria I mentioned above: WebHostingPad.

RELIABLE WEB HOSTING

70% off hosting & a free domain

  • FREE SSL Certificate
  • FREE Easy-Install WordPress
  • 30-Day Money Back Guarantee*
  • 99.9% Uptime Guarantee*
  • Hassle-Free Unlimited Hosting

Starting at just $1.99 / month

Go get your webhosting from webhostingpad now, you will have an excellent experience 🙂

Conclusion:

The Best hosting services are the ones that lie in the narrow intersection of ‘Best’ and ‘Affordable’ hosting.

 If you’re looking for the best hosting service, you have to ask yourself: 

What does that mean ?

Whatever is ‘Best’ will largely depend on your taste. 

I would argue that the ‘best’ hosting packages are the ones that satisfy the most of the points I have mentioned, for a reasonable cost.

Note:

Connect with me on: Blog, YouTube, Facebook, Twitter to get new updates.

Featured

Angular Start to Slowly Dying

Most developers don’t want to admit it to themselves:

“I have learned React, now stop the time and let no better technology render my knowledge obsolete!”

Luckily, the history of technology has proved this kind of opinion to be just a minor setback.

In my humble opinion React is awesome, much better than AngularJS (v1), but Angular 2+ is where the future of front-end development is.

It embodies good practices of software development, and it will die, like any other technology, when it becomes obsolete; that will come after React, if React’s developers don’t react and keep moving forward.

1- Angular v1 and Angular v2 are two different gigantic species
Inventing new, better ways of doing things is not re-inventing the wheel.

It’s called progress.

Angular v1 and v2 are different worlds: v1 is the past of front-end development, v2+ is the beginning of the future.

2- Locking developers into a boxed solution
This is not true !

Anyone can publish Angular modules publicly and anyone can add them to their projects. 

This is the first real framework for JS; all the others, including React, are technically just libraries.

3- Two-way data binding was a very bad bug of 2013, React helped us realize that!

Misinformation !

Angular 2+ doesn’t come with built-in two-way data binding.

 There’s only one directive that implements it, and it’s a very handy feature, not a bug. 

It works like a charm.

4- Types, decorators, and reactive, observable state management, all this shit is very hard for a beginner!

I agree, but building advanced web applications is not for a beginner either.

These are things you might have to learn if you plan to change that beginner status.

And it’s not that hard, really, to be honest.

It’s not that tough, and when you learn it, you will realize how powerful those things are, how they help you write well-formed applications, and how they help your colleagues understand what you actually wrote.

TypeScript is making JS a beautiful and serious programming language.

RxJS (Observables) is just awesome.

5- In simple terms React, Vue are all working on powering JS, on the other hand, Angular 2 continues to put Javascript(JS) into HTML. React puts HTML into JS.

Misinformation !

You don’t put JS into HTML in Angular.

 It’s keeping them separated. 

Keeping presentation away from logic has always been a good practice not only in front-end but in software development in general. 

There’s nothing wrong with HTML, React renders it too you know. 

Angular’s templating system also follows good practices proven in different templating engines.

 It’s simple and powerful.

6- You have to take a few days to learn our gigantic framework.

Like any other useful knowledge you possess, you have to gain it.

Don’t be afraid of it, it’s not that difficult really. 

It took me personally more time to understand React.

7- We’re Google remember?

I don’t know how this is relevant to the point.

In the context of the question “Is Angular dying?”, this is a strong argument that it won’t die easily while it is supported by the single most powerful corporation in the web industry.

Don’t forget Angular is meant to be used with TS, which is a product of Microsoft. 

I’m personally not very fond of big corporations, but questioning their engineering authority is just plain ignorance.

8- A lot of people choose Angular because they hear it’s a tool created by Google, it must therefore be the framework that is predominantly used at Google for front-end applications? They would be wrong in that assumption. In truth, Google trusts Closure for most of the applications you know and love including Search, News, Gmail, etc.

Angular 2 was released on 14.09.2016. 

Switching Google’s production applications to a brand new technology is, I imagine, a huge and extremely expensive job.

9- The Battle Is Over: React Won!

In technology, the battle is never over.

It’s very useful to know both and make informed decisions instead of what you’re doing here. 

There are many types of projects where React would be more appropriate, and vice versa.

10- It is often claimed that Angular is better for the enterprise and this is this framework’s alleged sweet spot. However, this claim is not backed up by actual production results or history itself. There are few enterprise Angular 2/4 results in production today.

Again, Angular 2 was released on 14.09.2016.

It’s a new technology and still behind React when we talk about community and number of experts.

Because Angular is opinionated, stricter, and enforces good practices of application architecture and development, it is in fact much more practical for big projects, team work, documentation, unit testing where appropriate, and the like.

In huge applications it even shows better performance than React.

11- There are no examples of some company moving a large codebase from React/Redux to Angular 2/4 and seeing productivity, maintainability or performance gains.

This expensive operation would make sense only if React was dying, and it’s definitely not.

It’s still a very stable and powerful library.

12- Angular is used minimally in production at Google. Google’s web properties depend on quick download and execution and that is something Angular does not shine in

Again, it’s a young framework. I don’t have any inside information from Google, but I believe it is being used more over time, and they stand firmly behind it.

Quick download times have next to nothing to do with which JS framework an application uses.

Angular’s execution time is very good, and it’s getting even better with new updates.

Conclusion

I think you have to learn both React and Angular and make an informed decision; don’t let your friends or your family miss out on great things because of the opinions of lazy, whining haters on the internet, hating things for all the wrong reasons and out of ignorance.

Some related articles you might be interested in:

1- 4 Practical Books for Software Architecture (video)

2- Professional Illustrate the Specifications before Jumping to Code

3- The Design Cannot Be Taught

4- Class Diagram is The Most Popular and Complex

5- How To Be a Great Problem-Solver Software Engineer

6- The Key to Becoming a Professional Software Engineer

As a bonus, I recommend the book Pro Angular 9: Build Powerful and Dynamic Web Apps to master Angular.

Connect with me on: Blog, YouTube, Facebook, Twitter.

Featured

4 Practical Books for Software Architecture

A lot of senior developers who aspire to become software architects or solution architects ask what they can do to get there. Which books, resources, or certifications can help?

How much experience do you need to become a software architect, and so on.

So I am suggesting some books to read to expand your knowledge base and look at software from an architecture and design perspective; this article is a compilation of many such suggestions.

Since too many books can be confusing, I have selected only the 4 best, must-read books from the software architect’s perspective.

1- The Architecture of Open Source Applications

In this book, the authors of a dozen open source applications explain how their software is structured, and why.

What are each program’s major components? 

How do they interact ? 

And what did their builders learn during their development? 

In answering these questions, the contributors to these books provide unique insights into how they think.

If you are a junior developer, and want to learn how your more experienced colleagues think, these books are the place to start.

 If you are an intermediate or senior developer, and want to see how your peers have solved hard design problems, these books can help you too.

2- 97 Things Every Software Architect Should Know: Collective Wisdom from the Experts

In this technical book, today’s leading software architects present valuable principles on key development issues that go way beyond technology. 

More than four dozen architects, including Neal Ford, Michael Nygard, and Bill de hÓra, offer advice for communicating with stakeholders, eliminating complexity, empowering developers, and many more practical lessons they’ve learned from years of experience.

Among the 97 principles in this book, you’ll find useful advice such as:

  • Don’t Put Your Resume Ahead of the Requirements (Nitin Borwankar)
  • Chances Are, Your Biggest Problem Isn’t Technical (Mark Ramm)
  • Communication Is King; Clarity and Leadership, Its Humble Servants (Mark Richards)
  • Simplicity Before Generality, Use Before Reuse (Kevlin Henney)
  • For the End User, the Interface Is the System (Vinayak Hegde)
  • It’s Never Too Early to Think About Performance (Rebecca Parsons)

To be successful as a software architect, you need to master both business and technology. 

This book tells you what top software architects think is important and how they approach a project. 

If you want to enhance your career, 97 Things Every Software Architect Should Know is essential reading.

3- Beautiful Architecture: Leading Thinkers Reveal the Hidden Beauty in Software Design

What are the ingredients of robust, elegant, flexible, and maintainable software architecture? 

Beautiful Architecture answers this question through a collection of intriguing essays from more than a dozen of today’s leading software designers and architects. 

In each essay, contributors present a notable software architecture, and analyze what makes it innovative and ideal for its purpose.

4- The Design of Design: Essays from a Computer Scientist

Effective design is at the heart of everything from software development to engineering to architecture. 

But what do we really know about the design process? What leads to effective, elegant designs? 

The Design of Design addresses these questions.

Conclusion:

That’s all about some of the best books for software architects, technical leads, and solution architects.

If you want to move the next step in your career towards an end goal of becoming a software architect, these are the books to read to expand your vision and knowledge.

Some related articles you might be interested in:

1- Professional Illustrate the Specifications before Jumping to Code

2- The Design Cannot Be Taught

3- Class Diagram is The Most Popular and Complex

4- How To Be a Great Problem-Solver Software Engineer

5- The Key to Becoming a Professional Software Engineer

Featured

The Design Cannot Be Taught

Design is all about making decisions, generally trading off among non-functional criteria.

Various sources can inform these decisions, such as the customer, the end user, technology specifications, and competitors’ products.

Sometimes, however, a more detailed analysis is required.

Examples of such devices include simulations, prototypes, and design studies.

In this article I will focus on design studies.

When an architect designs a building, often one of the early steps is to undertake a design study.

This takes the form of a series of scale models where different approaches are explored in order to get a better feel for the design space, which is the range of possibilities available as solutions.

The same approach is used in other areas of design, such as cars, planes and even clothing.

A design study is a rigorous and systematic evaluation of the factors that influence a design.

It should begin with the determination of the relevant criteria, metrics, and thresholds.

How are they measured?

What measurement values are deemed satisfactory?

The study itself consists of a comparison of the various possible approaches, in which each approach is measured against the predetermined criteria.

The process of doing a design study helps the designer explore a space of possibilities.

“Design can’t be taught.”

But what can be taught are the surrounding skills, such as analysis, modeling, and evaluation.

Instead, design must be learned. Learned by doing.

You can think of a design study as an empirical scientific experiment. As such, it has research questions, subjects of study, experimental conditions, methods, tools, metrics, independent variables, data collection, statistical analysis, and conclusions.

The overall goal of a design study is repeatability. That is, someone else should be able to take your study report, use it to recreate the study conditions, and reach the same conclusions that you did.

There are 9 steps in a design study:

1- Context

It provides background and motivation for the study, so that a reader who is not familiar with the class or the project can make sense of what you have written.

It should also define any specialized vocabulary necessary for the reader to understand.

2- Research questions

The design study examines the trade-offs between various non-functional requirements.

For example, space and time.

Each trade-off can be expressed in the form of a question, such as:

How are execution time and memory footprint affected as the amount of pre-processing computation varies?

Each question should be formulated in a neutral fashion.

3- Subject

A design study compares multiple subjects.

Each subject should be briefly described, differentiating it from the other subjects.

4- Experimental Conditions

A software design study normally means running several versions of a program, making measurements and evaluating the results.

These programs run on computers configured with certain resources,

such as their number of cores, the amount of RAM, their clock speed, and potentially the networking that ties them together.

The experimental conditions also describe the environment in which the study will take place,

such as the machines, their models, operating systems, programming languages, any virtual machines and their versions, and so on.

5- Variables

The design studies themselves have to be designed. In particular the independent variables must be identified and appropriate metrics specified.

Design studies, like experiments, allow designers, like scientists, to alter conditions and note the results.

This step describes the variables, both independent and dependent, their units of measure, and how the research questions address them.

6- Method

This includes the number of trials, measurement devices and tools, randomization technique where appropriate, the number of significant digits used in your measurements, and so on.

It should also include an explicit description of how each subject will be run, and the arguments used for each trial.

For example, if you were studying the relationship of performance to grid size, you would want to specify which different grid sizes you will be using.

The method should also describe any statistical technique you will use, for example linear regression.

7- Results

The point of conducting a design study is to produce data.

This section presents the data collected and their statistical analysis.

8- Discussion

This is the opportunity to interpret the data you collected and provide a discussion of its implications.

This often means offering an explanation for any unexpected values you see.

It also allows you to reflect on the experimentation itself, including any suggestions for further work or for improving the study process itself.

9- Conclusions

This is about summarizing the results and drawing conclusions.

It provides explicit answers to the research questions you raised in the second step.

Conclusion

As I said, design cannot be taught; you have to learn it. And I want you to learn it through projects.

I encourage you to invest energy and time in the projects and to think systematically about the design issues that each one of them raises.

Express that systematic thinking in the form of some experiments that you run, then write up those experiments in the form of a report.

I think doing this will force you to reflect upon the design process and thereby make it much more real for you.

And you, what do you think about the design process ?

As a bonus from this article, I want to share with you some books that changed my life in the domain of software design and engineering:

1- Clean Code: A Handbook of Agile Software Craftsmanship

2- Clean Architecture: A Craftsman’s Guide to Software Structure and Design (Robert C. Martin Series)

3- The Pragmatic Programmer: your journey to mastery, 20th Anniversary Edition

Featured

Class Diagram is The Most Popular and Complex

In this article, I am going to talk about how to express the results of problem understanding using the Unified Modeling Language (UML).

In particular, UML’s class model diagram, which is the most popular form of UML.

As I discussed in the previous article about OOA (Object-Oriented Analysis), it is the process by which you begin to understand the problem you are trying to solve.

Besides being popular, the class diagram is also the most complex of the diagram types in UML.

UML class diagrams are also called static structure diagrams.

Classes have features; by that I mean their attributes and their operations.

Classes exist in the real world; features exist inside the computer.

Example of classes:

Example of Class Diagram

I want to mention here, for the Reservation class, that responsibilities and exceptions are not part of UML.

They are here just to show you what the boxes are being used for.

There are some additional advanced features of class models.

For those, I’d like to mention interfaces, parameterized classes, nested classes, and composite objects.

If you are familiar with an object-oriented language like Java, you know that you can express a type in your program by using the interface construct.

In the interface description you typically describe what that interface provides to the rest of the system and what it requires from the rest of the system.

Parameterized classes correspond to Java generics or C++ templates. That is, they provide a way of, for example, describing collection classes by giving a parameter that is the type held by the class.

For example, you can have a set of vehicles or a set of bank accounts.

Thirdly, nested classes: if you are familiar with Java class definitions, you know you can have other classes inside a class.

They are sometimes called nested classes or inner classes.

Finally, you can have composite objects. These are objects that contain other objects within them.
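
To make those four constructs concrete, here is a rough Java sketch; the Vehicle, Garage, and Door names are invented purely for illustration.

import java.util.ArrayList;
import java.util.List;

interface Vehicle {                                      // an interface: what it provides to the system
    void drive();
}

class Garage<T extends Vehicle> {                        // a parameterized (generic) class
    private final List<T> vehicles = new ArrayList<>();  // a composite object: it contains other objects

    class Door {                                         // a nested (inner) class
        void open() { System.out.println("door open"); }
    }

    void park(T vehicle) { vehicles.add(vehicle); }
    int size()           { return vehicles.size(); }
}

public class ClassModelExample {
    public static void main(String[] args) {
        Garage<Vehicle> garage = new Garage<>();
        garage.park(() -> System.out.println("driving")); // a lambda implements the Vehicle interface
        System.out.println(garage.size());                // 1
        garage.new Door().open();                         // instantiating the inner class
    }
}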

In OOA we saw that nouns give us a good lead on what the classes are going to be.

Similarly, verbs can be used for several purposes, one of which is to describe the relationships between classes.

In UML there are three kinds of relationships. The first is association, for example the association between people and vehicles: people can drive vehicles.

There is generalization, for example a car is a kind of vehicle. And there are dependencies.

There might be a dependency between cars and pollution laws.

If a pollution law changes, cars might have to be adapted, for example by putting on some kind of pollution control device.

For associations, there are a lot of notation affordances:

  • Name
  • Association classes
  • Aggregation and composition
  • Generalization
  • Navigability
  • Multiplicity
  • Role names
  • Qualifier
  • Link
  • Constraints

Association class

You can think of it as an association that also has class properties, or a class that also has association properties:

Class Association Example

As you might see in the example above, there is another association for Job; it is a recursive association.

It is better to use role names for this kind of association.

Aggregation and Composition

Composition and Aggregation Example

This is about one class related to many other classes.

Aggregation doesn’t really say much about the semantics of the relationship; in particular, it doesn’t say much about the lifetimes of the participating objects.

For example, let’s say you have a House class and a Room class.

Clearly a house has rooms, so you would expect there to be an aggregation there. But further, if you destroy the house you also destroy the rooms.

Therefore, instead of using aggregation, we would use composition.

In a composition, the whole is responsible for managing the lifetime of its constituent objects.

That further implies that a particular constituent can belong to only one composition.

Compositions also have the transitive property. That is, a house can have rooms and a room can have closets.

For aggregations there are no rules like this; aggregations cover more general situations.

I might say, for example, that a room has a table.

Now, this is an aggregation situation, because the table can outlive the room: we can certainly destroy the room after taking the table out.

They don’t have the same lifetime. Therefore we’d use aggregation instead of composition.
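
As a rough code-level sketch of that distinction (House, Room, and Table are invented here for illustration), composition means the whole creates and owns its parts, while aggregation just holds a reference to an object whose lifetime is managed elsewhere.

import java.util.ArrayList;
import java.util.List;

class Table { }

class Room {
    private Table table;                      // aggregation: the table is created elsewhere
    void place(Table t) { this.table = t; }   // and can outlive the room
}

class House {
    private final List<Room> rooms = new ArrayList<>(); // composition: the house creates

    House(int roomCount) {                               // and owns its rooms
        for (int i = 0; i < roomCount; i++) {
            rooms.add(new Room());
        }
    }

    Room room(int index) { return rooms.get(index); }
    int roomCount()      { return rooms.size(); }
}

public class OwnershipExample {
    public static void main(String[] args) {
        Table table = new Table();             // lives independently of any room
        House house = new House(3);
        house.room(0).place(table);
        System.out.println(house.roomCount()); // 3
        // Discarding the house discards its rooms with it, but the table survives.
    }
}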

Qualifiers

They are indicated with small rectangles that sit on the sides or edges of class rectangles.

The small rectangle contains the name of one of the attributes of that particular class.

The attribute within the small rectangle is the qualifier; it provides access to instances of that particular class.

If you were doing a relational database model, you would think of the qualifier as the key into the set of instances.

Links

Just like classes can have instances, associations can have links.

Example:

If we had an association where a company hires people, we might have: Facebook hires Jack, Facebook hires Sara, Google hires Tom, and Google hires Sara.

Sara has two jobs. In this situation we would have four different links.

Generalization

It is also indicated by a solid line, but in this case the line ends with a triangle.

The semantics are that all instances of the subclass are also instances of the parent class; that is, a subset relationship.

Generalization is not the same as inheritance. Inheritance is an implementation technique; generalization is a modeling approach.

In UML, generalization supports both multiple parent classes for a given class and multiple child classes.

You can specify discriminators, that is, names of groups of subclasses.

Conclusion:

UML provides a rich vocabulary for modeling a system’s structure, and the UML class model diagram exhibits many, many different features.

However, there is no need for you to use all of its affordances,

particularly at the start of the modeling process.

Nevertheless, each affordance implies a question to be answered.

What is the multiplicity ?

Are these values ordered ?

What’s the qualifier ?

Does the system that you are modeling exhibit the property expressed by that affordance?

One of the important benefits of modeling is that it encourages you to face these questions early in the development process.

Because if you forget them, they may come back to haunt you later.

As a bonus from this article, I want to share with you some books that changed my life in the domain of software design and engineering:

1- Clean Code: A Handbook of Agile Software Craftsmanship

2- Clean Architecture: A Craftsman’s Guide to Software Structure and Design (Robert C. Martin Series)

3- The Pragmatic Programmer: your journey to mastery, 20th Anniversary Edition

And you, what do you think about class diagram ?

Featured

How To Be a Great Problem-Solver Software Engineer

Before you can solve a problem, you need to understand it. The process of understanding a problem is called analysis.

The most important analysis type is Object-Oriented Analysis (OOA).

OOA is a requirements analysis technique developed by Abbott and Booch in the 1980s.

It concentrates on modeling real-world objects based on their descriptions in natural language to produce an object analysis model.

Structured analysis and design techniques are functionally oriented.

They concentrate on the computations that need to be made; the data upon which the functions operate are secondary to the functions themselves.

Object-oriented analysis and design, on the other hand, is primarily concerned with the data objects. These are defined first in terms of their attributes and data types, and operations are then defined and associated with specific objects.

First, it takes a textual description of the system to be built, such as a requirements document, and looks for different kinds of words such as nouns, verbs, and adjectives.

Nouns correspond to classes, action verbs to operations, adjectives to attributes, and stative verbs to relationships.

The resulting class model can be reviewed with a customer for accuracy and completeness.

Here is an overview of the steps involved:

1- Candidate object classes are indicated by the occurrence of nouns in the natural-language description of the system to be built.

2- The nouns can then be organized into related groups termed classes.

3- The next step looks for adjectives, which often indicate properties that can be modeled as attributes.

4- Subsequently, action verbs can be modeled as operations and assigned to an appropriate provider class.

5- Other, stative verbs are indicative of relationships among classes.

Actually, the OOA techniques are the following:

1- Obtain and prepare textual description of the problem

2- Underline all the nouns

3- Organize the nouns into groups to become candidate classes

4- Underline all the adjectives

5- Assign the adjectives as attributes of the candidate classes

6- Underline the verbs, differentiating action from stative verbs

7- Assign action verbs as operations of classes

8- Assign stative verbs as attributes of classes or relationships

It sounds simple, doesn’t it?

Here are some of the issues that arise when we try to accomplish the first step in OOA.

1- Some words are duplicated

2- Some words share the same stem

Some words are close to each other and really share the same underlying concept, like leaf and leaves.

In these cases we do what is called stemming.

Stemming removes the prefixes and suffixes of the words and just uses the root word as the corresponding candidate class.

Conclusion

Like any analysis process, the conclusions that we reach are always tentative. As we engage in the process, we learn more about the problem, which may lead to revisions of our analysis.

In fact, one of the early lessons of software engineering is that requirements documents are always wrong,

in the sense that they’re incomplete, or inconsistent, or they don’t truly reflect what it is that the customer ultimately wants. And as analysts, it’s our job to elicit the correct description.

Thanks for Reading !

If you like my work and like to support me …

  • Follow me on twitter here
  • Follow me on facebook here
  • Subscribe to my Youtube channel, where I share lots of amazing content like this, but in video
Featured

The Key to Becoming a Professional Software Engineer

The process of building a program while satisfying a problem’s functional requirements and not violating its non-functional constraints is called software design.

Design is the most creative part of the software development process.

It is normally broken into two main parts: architectural design and detailed design.

Architectural design

The process of identifying and assigning responsibility for aspects of behavior to the various modules or components of a software system.

Detail design

It is about dealing with each particular component.

It is the process of specifying the behavior of each of the system components that you identified during the architectural design.

It includes data structures and algorithms.

One of the statements I like most about software design is this one from Wasserman:

“The primary activity during data design is to select logical representations of data objects identified during the requirements definition and specification phase.

The selection process may involve algorithmic analysis of alternative structures in order to determine the most efficient design or may simply involve the use of a set of modules that provide the operations upon some representation of an object.”

There are many approaches to software design. Some address how best to structure a system, such as object-oriented design.

Some are intended for a particular class of application, such as the design of real-time systems.

Some are structured to deal with a certain kind of concern, such as user interface design.

All of these approaches can be compared along three aspects: the design method, the design representation, and how the design is going to be validated.

Design method

A method is a systematic sequence of steps that a software engineer or designer uses to solve a problem.

It suggests a particular way of reviewing a problem.

The design representation

The representation is independent of the programming language used in software implementation and covers certain basic concepts of software design: control flow, data flow and data abstraction.

The design validation

It is the process of checking whether the specification captures the customer’s needs.

The bottom line is that you can’t design a complex system without having some idea of what it is supposed to do.

That’s why we use an analysis model, which is concerned with the problem being solved, while the design is concerned with the solution to that problem.

Diagrams can help express that understanding and express your solution to the problem.

Conclusion:

UML (the Unified Modeling Language) provides a wealth of diagram types for you, as well as OCL (the Object Constraint Language) and a meta-model.

In general, the more precisely you understand the problem, the fewer subsequent problems you will have over that system’s history.

Thanks for Reading !

If you like my work and like to support me …

  • Follow me on twitter here
  • Follow me on facebook here
  • Subscribe to my Youtube channel, where I share lots of amazing content like this, but in video
Featured

How To Visualize COVID-19 in An Effective Way

In this article I am going to show you how to visualize the COVID-19 pandemic using Python libraries.

At the end of this article you will get this result:

Static Choropleth Maps

The libraries used for making this output are:

Numpy

It is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays.

Pandas

pandas is a software library written for the Python programming language for data manipulation and analysis.

Plotly

The Plotly Python library is an interactive, open-source plotting library that supports over 40 unique chart types covering a wide range of statistical, financial, geographic, scientific, and 3-dimensional use-cases.

Built on top of the Plotly JavaScript library.

Plotly Express

It is a new high-level Python visualization library. It is a wrapper for Plotly.py that exposes a simple syntax for complex charts.

Graph_objs

This package imports definitions for all of Plotly’s graph objects.

The reason for the package graph_objs and the module graph_objs is to provide a clearer API for users.

In the source code below, I focused on recovered cases for just one day, which is 9/04/2020.

# Import libraries
import numpy as np 
import pandas as pd 
import plotly as py
import plotly.express as px
import plotly.graph_objs as go
from plotly.subplots import make_subplots
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
# Load data frame and tidy it
df = pd.read_csv('time_series_covid_19_recovered.csv')
# Note: apply(str) returns a new Series; as written, this line does not
# modify the DataFrame, so the counts stay numeric for the color scale below
df['4/9/2020'].apply(str)
# Rename columns
df = df.rename(columns={'Country/Region':'Country'})
df = df.rename(columns={'4/9/2020':'Date'})
# Create the Choropleth
fig = go.Figure(data=go.Choropleth(
    locations=df['Country'], # Spatial coordinates
    z = df['Date'], # Data to be color-coded
    locationmode = 'country names', # set of locations match entries in `locations`
    colorscale = 'Viridis',
    marker_line_color = 'black',
    marker_line_width = 0.5,
))
fig.update_layout(
    title_text = 'Covid-19 recovered cases for the day 09/4/2020',
    title_x = 0.5,
    geo=dict(
        showframe = False,
        showcoastlines = False,
        projection_type = 'equirectangular'
    )
)
fig.show()

To get a dynamic choropleth map, you should make the update shown in the code below:

# Import libraries
import numpy as np 
import pandas as pd 
import plotly as py
import plotly.express as px
import plotly.graph_objs as go
from plotly.subplots import make_subplots
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
# Load data frame and tidy it
df = pd.read_csv('time_series_covid_19_recovered.csv')
# Note: apply(str) returns a new Series; as written, this line does not
# modify the DataFrame
df['4/9/2020'].apply(str)
# Rename columns
df = df.rename(columns={'Country/Region':'Country'})
df = df.rename(columns={'4/9/2020':'Date'})
# Creating the visualization 
#start of the update 
fig = px.choropleth(df, 
                    locations="Country", 
                    locationmode = "country names",
                    color="Date", 
                    hover_name="Country", 
                    animation_frame="Date"
                   )
#end of the update
fig.update_layout(
    title_text = 'Covid-19 recovered cases for the day 09/4/2020',
    title_x = 0.5,
    geo=dict(
        showframe = False,
        showcoastlines = False,
    ))
    
fig.show()

The output:

Dynamic Choropleth Map

Conclusion:

A choropleth map displays divided geographical areas or regions that are coloured in relation to a numeric variable.

It allows you to study how a variable evolves across a territory.

It is a powerful and widely used data visualization technique.

However, its downside is that regions with bigger sizes tend to have a bigger weight in the map interpretation, which introduces a bias.

Resources

https://www.kaggle.com/vignesh1694/covid19-coronavirus

Featured

An Insight to Data Mining Algorithms

One of the most instructive lessons is that simple ideas often work very well, and I strongly recommend the adoption of a simplicity-first methodology when analyzing practical datasets.

There are many different kinds of simple structure that datasets can exhibit.

In one dataset, there might be a single attribute that does all the work and the others may be irrelevant or redundant.

Inferring rudimentary rules

In any event, it is always a good plan to try the simplest things first.

The idea is this:

we make rules that test a single attribute and branch accordingly.

Each branch corresponds to a different value of the attribute.

It is obvious what the best classification for each branch is: use the class that occurs most often in the training data.
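
This single-attribute scheme is what the next section refers to as 1R ("one rule"). As a minimal sketch of the idea, assuming categorical attributes and a small in-memory dataset (oneRuleFor and the sample data below are purely illustrative):

import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class OneRuleSketch {

    // For one attribute column, map each attribute value to the class that
    // occurs most often with it in the training data.
    static Map<String, String> oneRuleFor(List<String> attributeColumn, List<String> classColumn) {
        Map<String, Map<String, Integer>> counts = new HashMap<>();
        for (int i = 0; i < attributeColumn.size(); i++) {
            counts.computeIfAbsent(attributeColumn.get(i), k -> new HashMap<>())
                  .merge(classColumn.get(i), 1, Integer::sum);
        }
        Map<String, String> rule = new HashMap<>();
        counts.forEach((value, classCounts) -> rule.put(value,
                Collections.max(classCounts.entrySet(), Map.Entry.comparingByValue()).getKey()));
        return rule;
    }

    public static void main(String[] args) {
        // Tiny made-up weather-style data: outlook vs. whether play happened
        List<String> outlook = List.of("sunny", "sunny", "overcast", "rainy", "rainy");
        List<String> play    = List.of("no",    "no",    "yes",      "yes",   "yes");
        System.out.println(oneRuleFor(outlook, play)); // e.g. {overcast=yes, rainy=yes, sunny=no}
    }
}

The full 1R method would build such a rule set for every attribute, count its errors on the training data, and keep the attribute whose rule set makes the fewest mistakes.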

Missing values and numeric attributes

Although a very rudimentary learning method, 1R does accommodate both missing values and numeric attributes.

It deals with these in simple but effective ways.

Missing is treated as just another attribute value.

So that, for example,if the weather data had contained missing values for the outlook attribute, a rule set formed on outlook would specify four possible class values, one each for sunny, overcast, and rainy and a fourth for missing.

Statistical modeling

The 1R method uses a single attribute as the basis for its decisions and chooses the one that works best.

Another simple technique is to use all attributes and allow them to make contributions to the decision that are equally important and independent of one another.

Constructing decision trees

Decision tree algorithms are based on a divide-and-conquer approach to the classification problem.

They work from the top down, seeking at each stage an attribute to split on that best separates the classes; then recursively processing the sub-problems that result from the split.

Conclusion

This strategy generates a decision tree, which can if necessary be converted into a set of classification rules — although if it is to produce effective rules, the conversion is not trivial.

Featured

5 Main Types of Knowledge Representation in Machine Learning

There are many different ways for representing the patterns that can be discovered by machine learning, and each one dictates the kind of technique that can be used to infer that output structure from data.

Once you understand how the output is represented, you have come a long way toward understanding how it can be generated.

In this article, I talk about the main types of representation:

Decision tables

Decision Table Example

The simplest, most rudimentary way of representing the output from machine learning is to make it just the same as the input.

Decision trees

Decision Tree Example

A divide-and-conquer approach to the problem of learning from a set of independent instances leads naturally to a style of representation called a decision tree.

Classification rules

classification rules example 

Classification rules are a popular alternative to decision trees.

The antecedent, or precondition, of a rule is a series of tests just like the tests at nodes in decision trees, and the consequent,or conclusion, gives the class or classes that apply to instances covered by that rule, or perhaps gives a probability distribution over the classes.

Association rules

Association rules are really no different from classification rules except that they can predict any attribute, not just the class, and this gives them the freedom to predict combinations of attributes too.

To reduce the number of rules that are produced, in cases where several rules are related it makes sense to present only the strongest one to the user.

For example, with the weather data,we can extract this rule:

If temperature = cool then humidity = normal

Rules with exceptions

Returning to classification rules, a natural extension is to allow them to have exceptions.

Then incremental modifications can be made to a rule set by expressing exceptions to existing rules rather than reengineering the entire set.

Instead of changing the tests in the existing rules, an expert might be consulted to explain why the new instance violates them, receiving explanations that could be used to extend the relevant rules only.

Clusters

Clusters Example

When clusters rather than a classifier is learned, the output takes the form of a diagram that shows how the instances fall into clusters.

In the simplest case this involves associating a cluster number with each instance, which might be depicted by laying the instances out in two dimensions and partitioning the space to show each cluster.

Conclusion

Knowledge representation is a key topic in classical artificial intelligence and is well represented by a comprehensive series of papers edited by Brachman and Levesque.

I mentioned the problem of dealing with conflict among different rules.

Various ways of doing this, called conflict resolution strategies, have been developed for use with rule-based programming systems. 

These are described in books on rule-based programming, such as that by Brownstown.

Featured

Data Mining : The impact of The Input

I think with any software system, understanding what the inputs and outputs are is far more important than knowing what goes on in between, and Data Mining is no exception.

The input takes the form of concepts, instances, and attributes.

So, in this article I explain these terms and talk about preparing the data.

What’s a concept ?

Photo by NeONBRAND on Unsplash

Four basically different styles of learning appear in data mining applications. 

In classification learning, the learning scheme is presented with a set of classified examples from which it is expected to learn a way of classifying unseen examples.

In association learning, any association among features is sought, not just ones that predict a particular class value.

 In clustering, groups of examples that belong together are sought. In numeric prediction, the outcome to be predicted is not a discrete class but a numeric quantity. 

Regardless of the type of learning involved, we call the thing to be learned the concept and the output produced by a learning scheme the concept description.

What’s in an example?

Photo by Markus Spiske on Unsplash

The input to a machine learning scheme is a set of instances.

 These instances are the things that are to be classified, associated, or clustered. 

Although until now we have called them examples, henceforth we will use the more specific term instances to refer to the input. 

Each instance is an individual, independent example of the concept to be learned.

What’s in an attribute?

Photo by Ali Hegazy on Unsplash

Each individual, independent instance that provides the input to machine learning, is characterized by its values on a fixed, predefined set of features or attributes. 

The instances are the rows of the tables.

Preparing the input

Photo by Markus Spiske on Unsplash

Preparing input for a data mining investigation usually consumes the bulk of the effort invested in the entire data mining process. 

Although this article is not really about the problems of data preparation, I want to give you a feeling for the issues involved so that you can appreciate the complexities.

Bitter experience shows that real data is often disappointingly low in quality, and careful checking, a process that has become known as data cleaning, pays off many times over.

Gathering the data together

Integrating data from different sources usually presents many challenges, not deep issues of principle but nasty realities of practice.

Different departments will use different styles of record keeping, different conventions, different time periods, different degrees of data aggregation, different primary keys, and will have different kinds of error.

 The data must be assembled, integrated, and cleaned up.

The idea of company wide database integration is known as data warehousing. 

Data warehouses provide a single consistent point of access to corporate or organizational data, transcending departmental divisions. 

They are the place where old data is published in a way that can be used to inform business decisions.

Missing values

Most datasets encountered in practice contain missing values.

Missing values are frequently indicated by out-of-range entries, perhaps a negative number (e.g., -1) in a numeric field that is normally only positive, or a 0 in a numeric field that can never normally be 0.

For nominal attributes, missing values may be indicated by blanks or dashes.

Sometimes different kinds of missing values are distinguished (e.g., unknown vs. unrecorded vs. irrelevant values) and perhaps represented by different negative integers (-1, -2, etc.).
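As a small, hedged illustration of handling such sentinel values with Weka’s Java API (the file name customers.arff and the attribute name age are hypothetical), you might mark the out-of-range entries as missing and then let a filter fill them in:

```java
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.ReplaceMissingValues;

public class CleanMissing {
    public static void main(String[] args) throws Exception {
        // Load a dataset (hypothetical file name).
        Instances data = new DataSource("customers.arff").getDataSet();

        // Treat the sentinel -1 in a numeric attribute as "missing" (hypothetical attribute).
        int ageIndex = data.attribute("age").index();
        for (int i = 0; i < data.numInstances(); i++) {
            Instance inst = data.instance(i);
            if (!inst.isMissing(ageIndex) && inst.value(ageIndex) < 0) {
                inst.setMissing(ageIndex);
            }
        }

        // Replace the now-explicit missing values with the attribute mean/mode.
        ReplaceMissingValues fill = new ReplaceMissingValues();
        fill.setInputFormat(data);
        Instances cleaned = Filter.useFilter(data, fill);
        System.out.println(cleaned.numInstances() + " instances cleaned.");
    }
}
```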

Inaccurate values

Data goes stale. Many items change as circumstances change. For example, items in mailing lists, such as names, addresses, and telephone numbers, change frequently.

You need to consider whether the data you are mining is still current.

Conclusion:

Data cleaning is a time-consuming and labor-intensive procedure but one that is absolutely necessary for successful data mining. 

With a large dataset, people often give up.

How can they possibly check it all ?

 Instead, you should sample a few instances and examine them carefully. 

You’ll be surprised at what you find. Time looking at your data is always well spent.

Featured

Data Mining is an Effective Tool for Decision-Making

The problem with real-life datasets is that they are often proprietary. No one is going to share their customer and product choice database with you so that you can understand the details of their data mining application and how it works. Corporate data is a valuable asset, one whose value has increased enormously with the development of data mining techniques such as those that will be described in this article.

So, in this article I will go through some examples of data mining applications.

First Example:

It is about the weather problem: 

Weather Dataset Example

By analyzing this table, we can extract the patterns below:

If outlook = sunny and humidity = high then play = no

If outlook = rainy and windy = true then play = no

If outlook = overcast then play = yes

If humidity = normal then play = yes

If none of the above then play = yes

These rules are meant to be interpreted in order: the first one, then, if it doesn’t apply, the second, and so on. A set of rules that is intended to be interpreted in sequence is called a decision list. Interpreted as a decision list, the rules correctly classify all of the examples in the table, whereas taken individually, out of context, some of the rules are incorrect.

For example, the rule

if humidity = normal then play = yes

gets one of the examples wrong (check which one). The meaning of a set of rules depends on how it is interpreted.
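To make the idea of a decision list concrete, here is a minimal sketch in Java of the weather rules above, evaluated strictly top to bottom; the method and parameter names are purely illustrative:

```java
// Decision list for the weather data: the first matching rule wins.
static String play(String outlook, String humidity, boolean windy) {
    if (outlook.equals("sunny") && humidity.equals("high")) return "no";
    if (outlook.equals("rainy") && windy)                   return "no";
    if (outlook.equals("overcast"))                         return "yes";
    if (humidity.equals("normal"))                          return "yes";
    return "yes"; // default rule: if none of the above then play = yes
}
```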

Not surprisingly! The rules we have seen so far are classification rules: they predict the classification of the example in terms of whether to play or not. It is equally possible to disregard the classification and just look for any rules that strongly associate different attribute values. These are called association rules. Many association rules can be derived from the weather data in the table above. Some good ones are as follows:

If temperature = cool then humidity = normal

If humidity = normal and windy = false then play = yes

If outlook = sunny and play = no then humidity = high

If windy = false and play = no then outlook = sunny and humidity = high

Second Example: Irises, a classic numeric dataset

The dataset is provided on the Kaggle website; you can check the link below.

Iris Flower Dataset: Iris flower data set used for multi-class classification (www.kaggle.com)

The iris dataset, which dates back to seminal work by the eminent statistician R.A. Fisher in the mid-1930s and is arguably the most famous dataset used in data mining, contains 50 examples each of three types of plant: Iris setosa, Iris versicolor, and Iris virginica. 

There are four attributes: sepal length, sepal width, petal length, and petal width (all measured in centimeters).

Unlike previous datasets, all attributes have values that are numeric.

The following set of rules might be learned from this dataset:

If petal length < 2.45 then Iris setosa

If sepal width < 2.10 then Iris versicolor

If sepal width < 2.45 and petal length < 4.55 then Iris versicolor

If sepal width < 2.95 and petal width < 1.35 then Iris versicolor

If petal length ≥ 2.45 and petal length < 4.45 then Iris versicolor

If sepal length ≥ 5.85 and petal length < 4.75 then Iris versicolor

If sepal width < 2.55 and petal length < 4.95 and petal width < 1.55 then Iris versicolor

If petal length ≥ 2.45 and petal length < 4.95 and petal width < 1.55 then Iris versicolor

If sepal length ≥ 6.55 and petal length < 5.05 then Iris versicolor

If sepal width < 2.75 and petal width < 1.65 and sepal length < 6.05 then Iris versicolor

If sepal length ≥ 5.85 and sepal length < 5.95 and petal length < 4.85 then Iris versicolor

If petal length ≥ 5.15 then Iris virginica

If petal width ≥ 1.85 then Iris virginica

If petal width ≥ 1.75 and sepal width < 3.05 then Iris virginica

If petal length ≥ 4.95 and petal width < 1.55 then Iris virginica
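If you want to learn a rule set like this yourself rather than read it from a book, Weka can derive one directly from the ARFF version of the data. A minimal sketch, assuming the iris.arff file that ships in Weka’s data folder:

```java
import weka.classifiers.rules.JRip;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class IrisRules {
    public static void main(String[] args) throws Exception {
        // iris.arff ships in Weka's "data" folder; adjust the path as needed.
        Instances data = new DataSource("data/iris.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1); // class = species

        JRip ripper = new JRip();       // RIPPER rule learner
        ripper.buildClassifier(data);
        System.out.println(ripper);     // prints the learned rule list
    }
}
```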

Third Example, Loan company

The illustrations that follow tend to stress the use of learning in performance situations, in which the emphasis is on ability to perform well on new examples.

When you apply for a loan, for example, you have to fill out a questionnaire that asks for relevant financial and personal information. This information is used by the loan company as the basis for its decision as to whether to lend you money. Such decisions are typically made in two stages.

First, statistical methods are used to determine clear accept and reject cases. The remaining borderline cases are more difficult and call for human judgment.

For example, one loan company uses a statistical decision procedure to calculate a numeric parameter based on the information supplied in the questionnaire. Applicants are accepted if this parameter exceeds a preset threshold and rejected if it falls below a second threshold. This accounts for 90% of cases, and the remaining 10% are referred to loan officers for a decision. On examining historical data on whether applicants did indeed repay their loans, however, it turned out that half of the borderline applicants who were granted loans actually defaulted. Although it would be tempting simply to deny credit to borderline customers, credit industry professionals pointed out that, if only their repayment behavior could be reliably determined, it is precisely these customers whose business should be wooed. They tend to be active customers of a credit institution because their finances remain in a chronically volatile condition. A suitable compromise must be reached between the viewpoint of a company accountant, who dislikes bad debt, and that of a sales executive, who dislikes turning business away.

Featured

Introduction to Data Mining

We are overwhelmed with data. The amount of data in the world, in our lives, seems to go on increasing, and there’s no end in sight. Omnipresent personal computers make it too easy to save things that previously we would have trashed. Inexpensive multi-gigabyte disks make it too easy to postpone decisions about what to do with all this stuff; we simply buy another disk and keep it all.

The World Wide Web overwhelms us with information. Meanwhile, every choice we make is recorded. And all these are just personal choices: they have countless counterparts in the world of commerce and industry. We would all testify to the growing gap between the generation of data and our understanding of it.

As the volume of data increases, inexorably, the proportion of it that people understand decreases, alarmingly. Lying hidden in all this data is information, potentially useful information, that is rarely made explicit or taken advantage of.

People have been seeking patterns in data since human life began. Hunters seek patterns in animal migration behavior, farmers seek patterns in crop growth, politicians seek patterns in voter opinion, and lovers seek patterns in their partners’ responses. A scientist’s job is to make sense of data, to discover the patterns that govern how the physical world works and encapsulate them in theories that can be used for predicting what will happen in new situations. The entrepreneur’s job is to identify opportunities, that is, patterns in behavior that can be turned into a profitable business, and exploit them.

Economists, statisticians, forecasters, and communication engineers have long worked with the idea that patterns in data can be sought automatically, identified, validated, and used for prediction.

As the world grows in complexity, overwhelming us with the data it generates, data mining becomes our only hope for elucidating the patterns that underlie it. Intelligently analyzed data is a valuable resource. It can lead to new insights and, in commercial settings, to competitive advantages.

Data mining is about solving problems by analyzing data already present in databases.

Consider the problem of customer churn. A database of customer choices, along with customer profiles, holds the key to this problem. Patterns of behavior of former customers can be analyzed to identify distinguishing characteristics of those likely to switch products and those likely to remain loyal. Once such characteristics are found, they can be put to work to identify present customers who are likely to jump ship. This group can be targeted for special treatment, treatment too costly to apply to the customer base as a whole. More positively, the same techniques can be used to identify customers who might be attracted to another service the enterprise provides, one they are not presently enjoying, and to target them for special offers that promote this service.

In today’s highly competitive, customer-centered, service-oriented economy, data is the raw material that fuels business growth, if only it can be mined.

How are the patterns expressed ? Useful patterns allow us to make nontrivial predictions on new data. There are two extremes for the expression of a pattern:

as a black box whose innards are effectively incomprehensible, and as a transparent box whose construction reveals the structure of the pattern.

Both, we are assuming, make good predictions. The difference is whether or not the patterns that are mined are represented in terms of a structure that can be examined, reasoned about, and used to inform future decisions.

Such patterns we call structural because they capture the decision structure in an explicit way. In other words, they help to explain something about the data.

Structural patterns

The rules do not really generalize from the data; they merely summarize it. In most learning situations, the set of examples given as input is far from complete, and part of the job is to generalize to other, new examples.

Real-life datasets invariably contain examples in which the values of some features, for some reason or other, are unknown. For example, measurements were not taken or were lost.

Machine learning

Earlier we defined data mining operationally as the process of discovering patterns, automatically or semi-automatically, in large quantities of data, where the patterns must be useful. An operational definition can be formulated in the same way for learning.

Things learn when they change their behavior in a way that makes them perform better in the future.

This ties learning to performance rather than knowledge. You can test learning by observing the behavior and comparing it with past behavior. This is a much more objective kind of definition and appears to be far more satisfactory.

Data mining

Data mining is a practical topic and involves learning in a practical, not a theoretical, sense.

We are interested in techniques for finding and describing structural patterns in data as a tool for helping to explain that data and make predictions from it.

The data will take the form of a set of examples.

Examples of customers who have switched loyalties, for instance, or situations in which certain kinds of contact lenses can be prescribed. The output takes the form of predictions about new examples: a prediction of whether a particular customer will switch, or a prediction of what kind of lens will be prescribed under given circumstances.

People frequently use data mining to gain knowledge, not just predictions. Gaining knowledge from data certainly sounds like a good idea if you can do it.

In conclusion, to learn more about data mining, I have made a video defining it; I hope you find it useful 🙂

Featured

Optimize, Manage, And Deploy The ML Model In An Effective Way

Reducing the training data will eventually reduce accuracy. Finding the right balance is a trade-off decision, and you can use the sensitivity analysis to help you choose the most efficient point along the curves.

Sensitivity analysis is a financial model that determines how target variables are affected based on changes in other variables known as input variables.

Optimizing model size for devices involves performing a sensitivity analysis for the critical parameter(s) of the chosen algorithm. Create models to observe their size, and then choose tangent points along the sensitivity curves for the optimum tradeoff. Machine Learning (ML) environments like Weka make it easy to experiment with parameters to optimize your models.

One of the huge advantages of Deep Learning(DL) algorithms is that generally, their size does not scale linearly with the size of the dataset, as was the case for the Random forest algorithm. DL algorithms such as CNN and RNN algorithms use hidden layers. As the dataset grows in size, the number of hidden layers does not. DL models get smarter without growing proportionally in size.

Model Version Control

Once created, you should treat your ML models as valuable assets. Although you did not write code in the creation process, you should consider them as code equivalents when managing them. This implies that ML models be placed under version control in a similar manner as your application source code.

Whether or not you store the actual model, a serialized Java object in the case of Weka’s model export, depends on whether the model is reproducible deterministically. Ideally, you should be able to reproduce any of your models from the input components, including:

  • Dataset
  • Input configuration including filters or preprocessing
  • Algorithm selection
  • Algorithm parameters

For deterministic models that are reproducible, it is not necessary to store the model itself. Instead, you can just choose to store the input components. When creation times are long, such as with the KNN algorithm for large datasets, it can make sense to store the model itself, along with the input components.
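As a minimal sketch of storing the model itself with Weka (the file names and the choice of Random Forest are illustrative), you can serialize the trained classifier and reload it later without repeating the training run:

```java
import weka.classifiers.Classifier;
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.SerializationHelper;
import weka.core.converters.ConverterUtils.DataSource;

public class ModelStore {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("training.arff").getDataSet(); // hypothetical file
        data.setClassIndex(data.numAttributes() - 1);

        RandomForest rf = new RandomForest();
        rf.buildClassifier(data);

        // Export the trained model as a serialized Java object...
        SerializationHelper.write("model/rf.model", rf);

        // ...and later load it back, e.g. on the device, without retraining.
        Classifier restored = (Classifier) SerializationHelper.read("model/rf.model");
        System.out.println(restored);
    }
}
```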

The following tools are free and open source, and promise to allow you to seamlessly deploy and manage models in a scalable, reliable, and cost-optimized way:

https://dataversioncontrol.com

https://datmo.com

These tools support integration with the cloud providers such as AWS and GCP. They solve the version control problem by guaranteeing reproducibility for all of your model-based assets.

Updating Models

One of the key aspects to consider when you begin to deploy your ML app is how you are going to update the model in the future. One of the solutions is to simply load the ML model directly from the project’s asset directory when the app starts.

This is the easiest approach when starting with ML application development, but it is the least flexible when it comes time to upgrade your application-model combination in the future.

A more flexible architecture is to abstract the model from the app. This provides the opportunity to update the model in the future without the need to rebuild the application.

Conclusion:

The best practices for creating and handling prebuilt models for on-device ML applications:

  • Optimal model size depends on the input dataset size, attribute complexity, and target device hardware capabilities.
  • Prepare a model sensitivity analysis plotting model accuracy vs model size.
Featured

4 Critical Factors for a Machine Learning Model

In Machine Learning application development, the model is one of your key assets. You must carefully consider how to handle the model, because it can grow to be very large, and you need to start by making sure the models you create can physically reside on your target device.

In this article I am going to cover four factors you must consider in the model integration phase. These factors are training time, test time, accuracy, and size.

Model training time

Training time is important. However, when you are deploying static models within applications at the edge, the priority is low because you can always apply more resources, potentially even in the cloud, to train the model.

Model test time

If an algorithm produces a complex model requiring relatively long testing times, this could result in latency or performance issues on the device when making predictions.

Model accuracy

Model accuracy must be sufficient to produce results required by your well-defined problem.

Model size

When deploying pre-trained Machine Learning models onto devices, the size of the model must be consistent with the memory and processing resources of the target device.

To understand how the factors interrelate, you can perform a sensitivity analysis, which determines how target variables are affected based on changes in other variables known as input variables.

Consider the Random Forest algorithm. The number of iterations, i, is a key variable that determines how many trees the algorithm produces. More iterations mean more trees, which results in each of the following (a small experiment sketch follows the list):

  • Higher degree of accuracy
  • Longer creation time
  •  Larger model size
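Here is a rough sketch of such a sensitivity analysis with Weka, assuming a hypothetical training.arff dataset and an illustrative set of tree counts; it records cross-validated accuracy and approximates model size by serializing each forest to memory:

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class RfSensitivity {
    public static void main(String[] args) throws Exception {
        // Hypothetical training file; substitute your own dataset.
        Instances data = new DataSource("training.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        for (int trees : new int[]{10, 50, 100, 200}) {
            RandomForest rf = new RandomForest();
            rf.setNumIterations(trees); // number of trees (older Weka versions use setNumTrees)

            // Cross-validated accuracy for this configuration.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(rf, data, 10, new Random(1));

            // Train on the full set and serialize to memory to approximate model size.
            rf.buildClassifier(data);
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(rf);
            }

            System.out.printf("trees=%d  accuracy=%.2f%%  size=%d KB%n",
                    trees, eval.pctCorrect(), bytes.size() / 1024);
        }
    }
}
```

Plotting accuracy against size for each point gives you the sensitivity curves, and the tangent point you pick is the trade-off you ship.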
Featured

Monetizing your application with ML

It is amazing how many apps are available on the app stores today. In fact, there are so many, it has become difficult to cut through the noise and establish a presence. A small percentage of apps on the app stores today use Machine Learning (ML), but this is changing.

Machine learning is the future of app development. You must learn to design ML performance into the app, including considerations for model size, model accuracy, and prediction latency.

These final two ML-Gates (Model Integration and Deployment) represent the “business end” of the ML development pipeline. They are the final steps in the pipeline, where you realize the benefit of all the hard work performed in the earlier phases when you were working with data, algorithms, and models. Model integration and deployment are the most visible stages, the stages that enable you to monetize your applications.

Managing Models

In ML application development, the model is one of your key assets. You must carefully consider how to handle the model, including

  • Model sizing considerations
  • Model version control
  • Updating models

Models can grow to be very large, and you need to start by making sure the models you create can physically reside on your target device.

Device Constraints

When you use ML models from the cloud providers, you simply rely on network connectivity and a cloud provider API to access models and make predictions. Storing prebuilt models on devices is a different approach, requiring you to understand the limitations of the target device.

It is common on Android devices to see applications with sizes greater than 300 MB. This does not mean you should create models with sizes to match. Huge models are difficult to manage. The primary downside of huge models is the time it takes to load them. With Android, the best approach is to load models on a background thread, and you would like the loading operation to complete within a few seconds.
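A minimal, platform-neutral sketch of that idea (assuming a Weka-serialized model file; on Android the file would typically come from the app’s assets directory) loads the model on a single background thread so the UI thread is never blocked:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import weka.classifiers.Classifier;
import weka.core.SerializationHelper;

public class ModelLoader {
    private final ExecutorService background = Executors.newSingleThreadExecutor();
    private volatile Classifier model;   // null until loading completes

    // Kick off loading off the UI thread; the app stays responsive meanwhile.
    public void loadAsync(String path) {
        background.submit(() -> {
            try {
                model = (Classifier) SerializationHelper.read(path);
            } catch (Exception e) {
                e.printStackTrace();     // a real app should surface this to the user
            }
        });
    }

    public boolean isReady() {
        return model != null;
    }
}
```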

Model accuracy, model training, and model testing times varied for each of the classification algorithms discussed in this article. There is an additional factor, model size, which is equally important to consider.

Featured

Weka-Explorer, How powerful it is !

Weka (Waikato Environment for Knowledge Analysis) was developed at the University of Waikato, New Zealand. It is free software licensed under the GNU General Public License. The Explorer is the main Weka interface. The figure below shows the Weka Explorer.


Across the top of the Explorer, you will see tabs for each of the key steps you need to accomplish during the model creation phase:

Preprocess: Filter is the word used by Weka for its set of data preprocessing routines. You apply filters to your data to prepare it for classification or clustering.

Classify: The Classify tab allows you to select a classification algorithm, adjust the parameters, and train a classifier that can be used later for predictions.

Cluster: The Cluster tab allows you to select a clustering algorithm, adjust its parameters, and cluster an unlabeled dataset.

Select attributes: The Select attributes tab allows you to select the best attributes for prediction.

Visualize: The Visualize tab provides a visualization of the dataset. A matrix of visualizations in the form of 2D plots represents each pair of attributes.

Weka Filters 

Within Weka, you have an additional set of internal filters you can use to prepare your data for model building. Weka, like all good ML environments, contains a wealth of Java classes for data preprocessing. If you do not find the filter you need, you can modify an existing Weka filter’s Java code to create your own custom filter.
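The same filters the Preprocess tab exposes can also be driven from Java. A small sketch, assuming the iris.arff file from Weka’s data folder and the Normalize filter as an example:

```java
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Normalize;

public class FilterExample {
    public static void main(String[] args) throws Exception {
        Instances raw = new DataSource("data/iris.arff").getDataSet();

        // The same Normalize filter the Explorer offers on the Preprocess tab,
        // driven from Java: rescales all numeric attributes to [0, 1].
        Normalize normalize = new Normalize();
        normalize.setInputFormat(raw);
        Instances scaled = Filter.useFilter(raw, normalize);

        System.out.println(scaled.toSummaryString());
    }
}
```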

Weka Explorer Key Options 

Explorer is where the magic happens. You use the Explorer to classify or cluster. Note that the Classify and Cluster tabs are disabled in the Weka Explorer until you have opened a dataset using the Preprocess tab. Within the Classify and Cluster tabs at the top of the Weka Explorer are three important configuration sections you will frequently use in Weka:

  • Algorithm options
  •  Test options
  • Attribute predictor selection (label) for classification 

There is a lot more to learn about the Explorer module than what I have covered in this article, but you already know enough to analyze your data using preprocessing, classification, clustering, and association with the Weka Explorer module.

If you plan to do any complicated data analysis that requires software flexibility, I recommend using Weka’s Simple CLI interface. You have a few new tools now, and practice makes perfect.

 Good luck with your data analysis 🙂

Featured

Introduction to machine learning with WEKA !

For a very good start in machine learning world

Weka is a comprehensive suite of Java class libraries. The Weka package implements many state-of-the-art machine learning and data mining algorithms.

In this article I will talk about the most important Weka modules, as follows:

Explorer

Explorer is an environment for exploring data with Weka. Explorer is Weka’s main graphical user interface. The Weka Explorer includes the main Weka packages and a Visualization tool. Weka main features include filters, classifiers, clusters, associations, and attribute selections.

Experimenter

Weka Experimenter is an environment for performing experiments and conducting statistical tests between learning schemes.

KnowledgeFlow

Weka KnowledgeFlow is an environment that supports the same functions as Explorer, but contains a drag-and-drop interface.

Workbench

Weka Workbench is an all-in-one application that combines the other modules within user-selectable perspectives.

Simple CLI

The Weka team recommends the CLI for in-depth usage of Weka. Most of the key functions are available from the GUI interfaces, but one advantage of the CLI is that it requires far less memory. If you find yourself running into Out Of Memory errors, the CLI interface is a possible solution.

There is some redundancy in the Weka modules. You are going to focus on the following three Weka modules because they are more than sufficient to create the models you need for your Java applications. Those three modules are:

  • Weka KnowledgeFlow
  • Weka Explorer
  • Weka Simple CLI

I have excluded the Experimenter and the Workbench. You will use the KnowledgeFlow module to compare the ROC curves of different algorithms. A ROC curve is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied.

The Experimenter could do this as well, but even though Weka does not have the best graphical interface, I prefer the graphical approach of the KnowledgeFlow module to the Experimenter. You can use the Workbench module if you are seeking a customized perspective for the Weka modules.

The Weka team does provide official documentation in the form of PDF file distributed with each release, and the University of Waikato has many videos and support resources for developers who want to learn Weka. The Weka manuals are 340+ pages and are essential reading if you wish to get serious about Weka.

As a conclusion, I would present the official Weka documentation from the Weka creators:

  • Weka manual: The Weka manual for the current release (such as WekaManual-3-8-2.pdf and WekaManual-3-9-2.pdf) is always included within the distribution. For any particular Weka release, the manual filename is WekaManual.pdf.
  • Weka book: The Weka team has published a book, Data Mining -Practical Machine Learning Tools and Techniques, written by Witten, Frank, and Hall. The book is a very good ML reference book.While it does not cover Weka in detail, it does cover many aspects of data, algorithms, and general ML theory.
  • YouTube: The Weka YouTube channel, WekaMOOC, contains many useful Weka how-to videos.
Featured

Solution for running machine learning on the Edge

deploying your model on device

One of your main goals is to apply ML solutions at the edge. This requires you to produce lightweight models that you can deploy into portable devices, such as mobile phones. Java ML (Machine Learning) environments meet these requirements.

Java ML environments check all the boxes:

  • They are free and open source.
  • You can easily produce lightweight models.
  • You can run Java ML environments on the desktop or in the cloud if higher compute resources are required.
  • It is easy to export the model for use in mobile devices or small computer form factors such as the Raspberry Pi.

In effect, the Java ML environment acts like a piece of middleware in your ML pipeline. Models created by the ML environment connect the input data with the user application.

There are several factors to consider in choosing the best Java ML environment.

The factors include:

License and commercial terms: You should favor free open sources packages that allow you to create models you can use for commercial applications.

Availability of algorithms: You should look for packages that support the seven most important algorithms.

Ongoing support: You should look for a community of users or a long-term commitment by the creators.

Portability of models: You should look for the ability to export models so Java clients in any device can use the models you create. This helps you to achieve ML at the edge.

Flexibility: Java continues to grow with each major release. You need a Java-based ML environment that can grow with the language.

Perhaps in the future, we will see ML features directly included with Java, much the same way that JSON and other features are now candidates for inclusion.

For the most important ML environments using the Java language, we have:

Weka

Weka is an abbreviation for Waikato Environment for Knowledge Analysis. The University of Waikato, in New Zealand, created Weka. Interestingly, Weka is also the name of a flightless bird in New Zealand (Gallirallus australis), hence the logo.

Weka, the ML environment, has been around a while. Ported to Java in 1997, it has been a mainstay in the data mining industry. In 2005, Weka received the Data Mining and Knowledge Discovery Service Award from ACM at the SIGKDD Conference. The decision to migrate Weka to Java has allowed it to stay relevant.

All of the important CML(Classic Machine Learning) algorithms are available for Weka. Weka has a friendly license, the GNU General Public License (GPL). Therefore, it is possible to study how the algorithms work and to modify them.

The Weka GUI looks dated. The Weka GUIs and visualization tools are not nearly as slick as RapidMiner. However, under the hood, it lacks nothing. Weka is a very capable ML environment that can deliver the models your ML apps require. Despite its inferior GUI relative to RapidMiner, Weka checks all of the boxes.

RapidMiner

RapidMiner is an incredible ML environment. RapidMiner is a leader in data science platforms. Java-based RapidMiner excels at the following:

• RapidMiner is lightning fast.

• RapidMiner has many tools.

• RapidMiner is excellent at preparing data.

• RapidMiner allows you to build predictive ML models.

In terms of flexibility, both Weka and RapidMiner provide jar file libraries that you can integrate into your Java projects. This allows you to leverage prebuilt models in your Java applications.
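As a minimal sketch of that integration (the model and header file names, and the attribute values, are hypothetical), a Java client can load an exported Weka model and classify a single new instance on the device:

```java
import weka.classifiers.Classifier;
import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.SerializationHelper;
import weka.core.converters.ConverterUtils.DataSource;

public class EdgePredictor {
    public static void main(String[] args) throws Exception {
        // Load a model exported earlier from Weka (hypothetical file names).
        Classifier model = (Classifier) SerializationHelper.read("rf.model");

        // The dataset header tells Weka what the attributes and classes are.
        Instances header = new DataSource("header.arff").getStructure();
        header.setClassIndex(header.numAttributes() - 1);

        // Build one unlabeled instance; the class value stays missing for prediction.
        Instance sample = new DenseInstance(header.numAttributes());
        sample.setDataset(header);
        sample.setValue(0, 5.1);   // illustrative attribute values
        sample.setValue(1, 3.5);
        sample.setValue(2, 1.4);
        sample.setValue(3, 0.2);

        double classIndex = model.classifyInstance(sample);
        System.out.println("Predicted: " + header.classAttribute().value((int) classIndex));
    }
}
```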

KNIME

Like RapidMiner, KNIME is considered a leader among data science platforms. Some key selling points for KNIME:

  • KNIME is a toolbox for data scientists.
  • KNIME contains over 2,000 modules.
  • KNIME is an open platform.
  • KNIME can run locally, on the server, or in the cloud, which is the kind of flexibility you seek.

ELKI

ELKI is a Java platform that excels at clustering and outlier detection. While Weka and RapidMiner are general frameworks, ELKI does one thing and does it well: clustering.

If the basic clustering algorithms contained in the general frameworks are not sufficient for your ML clustering problem, ELKI probably is the solution.

ELKI has a research and education focus. It has helped to solve real-world clustering problems such as clustering the positions of whales and rebalancing public bike share programs.

One of the unique features of ELKI is the use of SVG for scalable graphics output and Apache Batik for rendering of the user interface. If you need lossless, high-quality, scalable graphics output for your clustering problems, ELKI is an excellent choice.

Featured

4 ML algorithms you must have in your ToolBox !

Algorithms have many applications. With experience, you will find that a handful of algorithms can solve most of your problems. This article follows the previous one; it adds 4 more useful CML (Classic Machine Learning) algorithms you need in your toolbox.

The following 4 algorithms are the go-to algorithms for CML problems.

Support Vector Machine Algorithm (SVM)


The SVM is technically a linear classifier, but there’s a method that will also allow it to handle complex non-linear data.

For its input, the SVM is effective with numeric features only, but most implementations of the algorithm allow you to transform categorical features to numerical values. The SVM output is a class prediction. The algorithm tries to create the optimal hyperplane decision boundary between the classes by maximizing the margin between the support vectors.

The SVM algorithm has several advantages:

  • SVMs have fewer parameters to set up when building the model.
  • SVM algorithms have a good theoretical foundation.
  • SVMs are extremely flexible in the type of data they can support.
  • SVMs require less computational resources to get an accurate model than decision trees.
  • SVMs are not sensitive to noisy data.
  • SVM is a good algorithm for binary two-class outputs.
  • You can accomplish non-linear classification with SVMs by using kernel transformation.
  • SVMs can work well with a large number of features and less training data.

K-Means Algorithm


Clustering is the main task of exploratory ML, and when seeking a clustering algorithm, k-means is the usual starting point. It works well for many datasets. The k-means algorithm is iterative: it tries to partition the N observations into K clusters. You must start with the number of clusters.

The main drawback of the k-means algorithm is that you are required to know upfront how many clusters there are. With K=3, for example, the algorithm chooses three initial “means” randomly and creates the initial clusters by assigning each observation to the nearest mean. The centroid of each cluster becomes the new mean, and the process repeats until convergence is achieved.
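A minimal k-means sketch with Weka’s SimpleKMeans, assuming the iris.arff file from Weka’s data folder; the class attribute is removed first because clustering is unsupervised, and K and the random seed are illustrative choices:

```java
import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

public class KMeansExample {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("data/iris.arff").getDataSet();

        // Drop the class label: clustering is unsupervised.
        Remove dropClass = new Remove();
        dropClass.setAttributeIndices("last");
        dropClass.setInputFormat(data);
        Instances unlabeled = Filter.useFilter(data, dropClass);

        SimpleKMeans kmeans = new SimpleKMeans();
        kmeans.setNumClusters(3);      // K must be chosen up front
        kmeans.setSeed(42);            // the initial means are random
        kmeans.buildClusterer(unlabeled);

        System.out.println(kmeans);    // cluster centroids and sizes
    }
}
```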

DBSCAN Algorithm


DBSCAN stands for density-based spatial clustering of applications with noise.

The DBSCAN algorithm employs an approach not unlike human intuition to identify clusters and noise.

To accomplish this, DBSCAN requires two important parameters:

• MinPts: The minimum number of points required to form a dense region. A common rule of thumb is to use at least the number of dimensions in the dataset plus one, and the value must be at least 3.

• e: Epsilon is the radius of the neighborhood around a point, measured as a Euclidean distance. Small values are preferable. If e is too small, a large part of the data will not cluster. If e is too large, the clusters will merge. Choosing a good e value is the key to success with DBSCAN.

DBSCAN is one of the most common clustering algorithms and its advantages include

• DBSCAN does not require prior knowledge of the number of clusters.

• DBSCAN can find any shape of cluster.

• DBSCAN can find outliers.

• DBSCAN can identify noise.

• DBSCAN requires just two parameters.

• The ordering of the dataset does not matter.

The key disadvantages of DBSCAN include

  • The quality of DBSCAN depends on the e value. For high-dimensional data, it can be difficult to find a good value for e. This is the so-called curse of dimensionality. If the data and scale are not well known, it is hard to choose e.
  • DBSCAN cannot cluster datasets well with large differences in density.

Note that the OPTICS algorithm is a hierarchical version of DBSCAN, and the HDBSCAN algorithm is a faster variant of OPTICS.

Expectation-Maximization (EM) Algorithm


When k-means fails to achieve desirable results, consider the EM algorithm. EM often gives excellent results for real-world datasets, especially if you have a small region of interest.

EM is an iterative algorithm that works well when the model depends on unobserved latent variables. The algorithm iterates between two steps: expectation (E) and maximization (M). In the expectation step (E), a function is created for the expectation of likelihood. In the maximization step (M), parameters are created to maximize the expected likelihood in the E step.
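A minimal EM sketch with Weka, again assuming the iris.arff file; unlike k-means, Weka’s EM can estimate the number of clusters itself via cross-validated log-likelihood when you do not set it explicitly:

```java
import weka.clusterers.ClusterEvaluation;
import weka.clusterers.EM;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

public class EmExample {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("data/iris.arff").getDataSet();

        // Remove the class label before clustering.
        Remove dropClass = new Remove();
        dropClass.setAttributeIndices("last");
        dropClass.setInputFormat(data);
        Instances unlabeled = Filter.useFilter(data, dropClass);

        // Leaving the number of clusters unset lets EM pick it automatically.
        EM em = new EM();
        em.buildClusterer(unlabeled);

        ClusterEvaluation eval = new ClusterEvaluation();
        eval.setClusterer(em);
        eval.evaluateClusterer(unlabeled);
        System.out.println(eval.clusterResultsToString());
    }
}
```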

Conclusion:

Whether you are classifying or clustering, algorithm prediction accuracy is the key measure of the chosen algorithm’s performance.

The degree of accuracy you require is relative to the problem you are trying to solve. If you are building an ML model to determine the best day to play golf, a 90% confidence rate is acceptable. If you are trying to determine if a photograph of a skin spot is cancerous, or if a plot of land contains a landmine, 90% would not be acceptable.

Featured

Career-Changing Algorithms to Make You a Better Data Analyst

With experience, you will find that a handful of algorithms can solve most of your problems. This article will cover the 3 most useful CML(Classic Machine Learning) algorithms you need in your toolbox.

The following 3 algorithms are the go-to algorithms for CML problems. The list includes 3 classifier algorithms:

I-Naive Bayes (NB)

NB is a probability-based modeling algorithm based on Bayes’ theorem. Bayes’ theorem simply states the following:

 The probability of an event is based on prior knowledge of conditions that might be related to the event. 

Bayes’ theorem discusses conditional probability. Conditional probability is the likelihood that event A occurs given that condition B is true.

For example, consider human eyesight and its relationship to a person’s age. According to Bayes’ theorem, age can help assess more accurately the probability that a person wears glasses, compared to an assessment made without knowledge of the person’s age. In this example, the age of the person is the condition.

The reason for the naive part of the name is that the algorithm makes a very naive assumption about the independence of the attributes.

 Some advantages of NB algorithms include:

1- NB is good for spam detection where classification returns a category such as spam or not spam.

2- NB can accept categorical and continuous data types.

3- NB can work with missing values in the dataset by omitting them when estimating probabilities.

4- NB is also effective with noisy data because the noise averages out with the use of probabilities.

5- NB is highly scalable and it is especially suited for large databases.

6- NB can adapt to most kinds of classification,and it’s an excellent algorithm choice for document classification, spam filtering, and fraud detection.

7- NB is good for updating incrementally.

8- NB offers an efficient use of memory and fast training speeds. The algorithm is suitable for parallel processing.

The main disadvantage of NB is that the algorithm does not work well when data attributes have some degree of correlation; this violates the naive assumption of the algorithm.
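A minimal sketch of training and evaluating a naive Bayes classifier with Weka (the spam.arff file name is hypothetical; any labeled ARFF dataset with the class as the last attribute works the same way):

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class NaiveBayesExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical labeled dataset, e.g. messages labeled spam / not spam.
        Instances data = new DataSource("spam.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        NaiveBayes nb = new NaiveBayes();

        // 10-fold cross-validation gives an honest estimate of accuracy.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(nb, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}
```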

II- Random forest (classify)


To understand RF, it is first necessary to understand decision trees. A decision tree is a supervised learning method for classification, grown from the training dataset.

Decision trees can classify instances in the test dataset and are a divide-and-conquer approach to learning.

Random forest works well with high-dimensional data because each tree works with a random subset of the features, so the method can easily handle hundreds of features.

The RF algorithm has several advantages:

1- RF is easy to visualize so you can understand the factors that lead to a classification result. This can be very useful if you have to explain how your algorithm works to business domain experts or users.

2- Each tree in a random forest grows its structure on random features, minimizing the bias.

3- Unlike the naive Bayes algorithm, the decision tree-based algorithms work well when attributes have some correlation.

4- RF is one of the most simple, robust, and easily understood algorithms.

5- The RF bagging feature is very useful. It provides strong fit and typically does not over-fit.

6- RF is highly scalable and gives reasonable performance.

RF has some disadvantages:

1- Decision trees can be slow with large training times when they are complex.

2- Missing values can pose a problem for decision tree-based algorithms.

3- Attribute ordering is important, such that those with the most information gain appear first.

The RF algorithm is a good complement to the naive Bayes algorithm. One of the main reasons RF has become popular is that it is very easy to get good results.

III- K-Nearest Neighbors Algorithm (KNN)


The k-nearest neighbors algorithm is a simple algorithm that yields good results. KNN is useful for classification and regression.

KNN algorithms classify each new instance based on the classification of its nearby neighbor(s).
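A minimal KNN sketch with Weka’s IBk implementation, assuming the iris.arff file from Weka’s data folder and an illustrative k of 3:

```java
import weka.classifiers.lazy.IBk;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class KnnExample {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("data/iris.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        IBk knn = new IBk(3);        // k = 3 nearest neighbors
        knn.buildClassifier(data);   // lazy learner: mostly just stores the data

        // Classify the first instance as a quick smoke test.
        double predicted = knn.classifyInstance(data.instance(0));
        System.out.println(data.classAttribute().value((int) predicted));
    }
}
```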

KNN advantages:

1- KNN makes no assumptions on the underlying data.

2- KNN is a simple classifier that works well on basic recognition problems.

3- KNN is easy to visualize and understand how classification is determined.

4- Unlike naive Bayes, KNN has no problem with correlated attributes and works well with noisy data if the dataset is not large.

KNN disadvantages:

1- Choosing K can be problematic and you may need to spend time tuning K values.

2- KNN is subject to the curse of dimensionality due to reliance on distance-based measures. To help combat this, you can try to reduce dimensions or perform feature selection prior to modeling.

3- KNN is instance-based and processes the entire dataset for classification, which is resource intensive. KNN is not a great algorithm choice for large datasets.

4- Transforming categorical values to numeric values does not always yield good results.

5- As a lazy classifier, KNN is not a good algorithm choice for real-time classification.

As a conclusion, KNN is a simple, useful classifier. Consider it for the initial classification attempt, particularly if the disadvantages listed above are not an issue for your problem.

Featured

7 ML Algorithms will solve 95% of the problems

There is a popular saying among data scientists: algorithms are easy; the hard part is the data. Each box in the flowchart highlights the key algorithms you need to know. As you navigate the flowchart, the decision nodes depend on the amount of data and the type of data.

In some cases, you will find there is more than one algorithm that you could use. The general rule of thumb is to start simple by running the basic algorithms first.

Sometimes ML (Machine Learning) practitioners take a more functional approach to algorithm selection. Cloud platforms use this approach when they wish to shield users from the complications associated with the data type decisions required to choose an algorithm. Microsoft Azure ML does a particularly good job of using this approach to help users choose the correct algorithm.

The idea is to ask yourself the simple question, “What do I want to find out?” The answer to the question will lead you to the correct learning style and then to specific algorithms.

The picture below describes some answers to this question:

Conclusion:

The figure above guides you in the selection of the best ML algorithm. With experience, you will find that a handful of algorithms can solve most of your problems.

I think the seven most useful CML (Classic Machine Learning) algorithms you need in your toolbox include four classifiers and three clustering algorithms:

• Naive Bayes (classify)

• Random forest (classify)

• K-nearest neighbors (classify)

• Support vector machine (classify)

• DBSCAN (cluster)

• Expectation-maximization (cluster)

• K-means (cluster)

Of course, a special case will arise when you need to reach for an obscure algorithm, but 95% of the time, these seven algorithms will deliver excellent results.

Featured

Conventional wisdom for ML algorithms

There is a conventional wisdom for algorithm selection. Answers to the following questions help to determine which algorithm is best suited for your model:

• How much data do you have ?

• What are you trying to predict?

• Is the data labeled or unlabeled?

• Do you require incremental or batched training?

As you gain experience, you can quickly determine which algorithm is the best match for your problem and data.

Algorithm Styles

The world of ML(Machine Learning) algorithms is bifurcated into two equally important and useful categories. Before introducing the fancy terminology scientists use to describe each category, let’s first look at the types of data that define each category.

Labeled vs. Unlabeled Data

I defined the term label as what you are attempting to predict or forecast. Some organizations consider labeled data more valuable than unlabeled data.

Organizations sometimes even consider unlabeled data as worthless. This is probably shortsighted. You shall see that ML can use both labeled and unlabeled data.

Whether or not the data contains labels is the key factor in determining the ML algorithm style. ML algorithms fall into three general ML styles:

  • Supervised learning
  • Unsupervised learning
  • Semi-supervised learning

Supervised learning

The input training data has known labels. A model is prepared by correcting wrong predictions, and the training process continues until a desired accuracy is reached.

Unsupervised learning

The input data is not labeled and does not have a known result. A model is prepared by deducing structures present in the input data to extract rules or to organize the data by similarity.

Semi-supervised learning

Input data is a mixture of labeled and unlabeled examples. There is a desired prediction, but the model must learn the structures in the data to make it.

The most important algorithms fall into one of these categories. Supervised algorithms use data with labels. Unsupervised algorithms use data without labels. Semi-supervised algorithms use data with and without labels.

Supervised learning is the easiest ML learning style to understand. Supervised learning algorithms operate on data with labels.

With ML, it is hard to avoid manual work with data. I refer back to Mr. Silver’s interesting quote about expecting more from ourselves before we expect more from our data.

Conclusion:

The main reason developers shy away from deploying ML is that algorithm selection and model creation are too complicated. Fortunately, you can overcome the algorithm complexity issue by learning some basic principles and gaining an understanding of the scientific language associated with ML algorithms.

Featured

1 big secret about data storage which will help you succeed 100% in your ML project

We define unstructured data as data with little or no metadata and little or no classification. ML (Machine Learning) often uses unstructured data. Unstructured data includes many categories, such as videos, emails, images, IoT (Internet of Things) device data, file shares, security and surveillance data, log files, web data, user and session data, chat and messaging, Twitter streams, sensor data, time series data, and retail customer data.

Unstructured data can be characterized by the three Vs: volume, velocity, and variety.

• Volume: Size of the data.

• Velocity: How fast the data is generated. Jet engine sensors, for example, can produce thousands of samples per second.

• Variety: There are many different kinds of data.

The problem with traditional databases is that they are hard to scale and not well suited for unstructured data. One of the best ways to store unstructured data in the cloud is with NoSQL databases because they do a much better job at handling this type of data.

To understand how NoSQL databases differ from traditional RDBMS (Relational Database Management System) databases, it is useful to review the CAP theorem, originally described by Eric Brewer. The CAP theorem states that for distributed database architectures, it is impossible to simultaneously provide more than two out of the following three guarantees:

  • Consistency: Every read receives the most recent write or an error.
  • Availability: Can always read or write to the system, without guaranteeing that it contains the most recent value.
  • Partition tolerance: The system continues to operate despite an arbitrary number of messages being dropped or delayed by the network between nodes.

Database theorists used two interesting terms to describe these database philosophies:

• ACID: Atomicity, Consistency, Isolation, Durability

• BASE: Basically Available, Soft state, Eventual consistency

RDBMS databases choose ACID for consistency and availability. Distributed NoSQL databases choose BASE for either partitioning/consistency or partitioning/availability. Many popular NoSQL databases use the BASE philosophy.

Google Bigtable

Google’s NoSQL big data database service. Google says it can handle massive workloads with low latency and high throughput. It powers many of the Google services such as Maps, Gmail, and Search.

AWS DynamoDB

Fully managed proprietary NoSQL database from Amazon. DynamoDB supports key-value and document data structures. High durability and availability.

Apache HBASE

A distributed, scalable big data store. HBASE is the Hadoop database. The Apache project’s goal is hosting very large tables of billions of rows and millions of columns. Written in Java and modelled after Google’s Bigtable.

Riak KV

A distributed NoSQL database from Basho. Allows you to store massive amounts of unstructured key-value data. Popular solution for IoT.

Apache Cassandra

Highly scalable NoSQL database. Claims to outperform other NoSQL databases due to architectural choices. Used by Netflix, Apple, EBay, etc.

MongoDB

Cross-platform, document-based, NoSQL database based on JSON-like documents.

CouchDB

Distributed NoSQL document-oriented database optimized for interactive applications.

Conclusion:

Data size and performance are also important factors to consider when selecting a NoSQL database. MongoDB and CouchDB are excellent choices for small to medium dataset sizes, while Cassandra is excellent for large datasets.

Featured

Don’t reinvent the wheel, leverage the cloud for ML

There may be times when you don’t need to build and deploy your own ML models. In these cases, you can leverage the high-level cloud APIs provided by the big four cloud providers: Amazon, Google, IBM, and Microsoft.

All of their APIs fall into five distinct categories: language, vision, data insights, speech, and search.

While Google and AWS do a great job at providing the lower-level tools and building blocks we need to implement ML solutions, IBM and Microsoft do an equally fine job at providing higher-level models we can access by API. Each provider offers many APIs to solve a wide variety of problems in each of the five categories.

Most of these APIs employ DL(Deep Learning) methods created from the massive amount of data the cloud providers own. The APIs are mostly free to try. If you decide to use these APIs commercially, you will typically just need to pay the cloud provider’s inference fee per API call. Recall that the inference fee is the fee to make predictions. You can make real time predictions or batch predictions.

Whether you use alternative ML (Machine Learning) APIs or ML APIs from the big four cloud providers, there are a huge number of product offerings you can choose from. If you think back to the ML-Gates, at MLG6, you must start with a well-defined problem. At that point, it is a best practice to scan the available APIs to see if any of them exactly match the problem.

There is no need to reinvent the wheel. The large cloud providers have so much data, it would be hard to create a better solution than the models they make available to us. While the big four cloud providers have many APIs, it can be fruitful to explore whether external alternatives are available.

Alternative ML API Providers

There are times when you might want to consider alternative cloud API model providers.

If you have a niche application not covered by the big cloud players, alternative providers who specialize in certain applications could provide a solution.

Sometimes you just wish to differentiate your product from competitors who all use the large cloud provider APIs. Using alternative smaller cloud API providers in these cases could be a viable strategy.

For Data extraction:

www.diffbot.com/products/automatic/

For Emotion and vocal analytics from an Israeli company: www.beyondverbal.com

For Face recognition:

www.kairos.com/face-recognition-api

For Chat bot :

https://wit.ai/getting-started

And real-time license plate recognition:

www.openalpr.com/cloud-api.html

Conclusion:

In short: start from a well-defined problem, scan the ML APIs available from both the big four cloud providers and the alternative providers, and don’t reinvent the wheel when an existing model already solves your problem.

For the ML-Gates methodology, visit this link to learn more:

Featured

Cloud + machine learning = business growth

IaaS (Infrastructure as a Service) solutions allow you to scale your compute environment to match your demands in terms of CPU, memory, and storage requirements. You only need to pay for the resources you require. They also give you the ability to easily distribute your resources across geographic regions.

This approach is much easier and more affordable than building your own servers and upgrading them when they become too slow.

One of the advantages of creating CML (Classic Machine Learning) solutions compared to DL (Deep Learning) is that they require far less data and fewer CPU resources. This generally enables you to create solutions entirely on the desktop. However, you should not overlook the cloud. The cloud providers continuously improve their ML (Machine Learning) offerings. Today they provide an amazing array of services and APIs that make it easier than ever for developers who do not have prior ML experience to create and deploy ML solutions. When deciding between local and cloud resources, there are many considerations:

Local resource availability:

Do you have a local desktop machine or server that can process large data sets and build ML models ? Local processing allows you to retain control of your data and avoid cloud usage fees.

Deep learning :

Deep learning projects tend to favor cloud-based architectures because of their reliance on larger data sets and high computational requirements for model creation.

Geographic diversity:

The cloud providers can allow you to spin up resources in a variety of countries and regions globally. It is advantageous to place resources as close as possible to users.

Data size:

Do you have a dataset size that is manageable on the desktop, as is often the case for CML projects ?

Scalability:

Do you anticipate your data or storage requirements will grow in the future ? Cloud providers offer much better scalability. Adding cloud resources is much easier than upgrading or purchasing a more powerful desktop/server.

Time constraints:

Is model creation time important? Even for CML projects with modest to large datasets, creating the model on a single desktop or server CPU could take minutes to hours. Moving these computation-intensive operations to the cloud could drastically cut your model creation times. If you need real-time or near real-time creation times, the cloud is your only option.

Availability:

Do you require high availability ? Your project can benefit from the distributed, multi-node architectures provided by all of the cloud providers.

Security considerations:

If you operate your own Internet-connected server, you know what a challenge security is. Cloud providers simplify security because you can leverage their massive infrastructure.

Privacy considerations:

Your clients might not want their data on a public cloud network managed by one of the big four providers. In this case, you can implement a private cloud solution and charge a premium.

Even if you decide against using a cloud provider for your project, it is important to keep an eye on their product offerings. The services are constantly being updated, and your decision may change based on those updates.

Due to fierce competition among the largest cloud providers, the cost of cloud resources today is largely identical across platforms. The big four are keenly aware of their competitors’ offerings, and pricing arbitrage opportunities no longer exist. Cloud ML services are not free. Regardless of the type of container or virtualization technology they use, dedicated or shared hardware (CPU, memory, storage) is required at some point. Each provider typically has a free trial so you can experiment with the service before buying.

The goal of building a model is to utilize it to make predictions. AWS ML allows for real-time, single, or batch predictions. Batch predictions are particularly useful, allowing you to load many instances to classify as a batch. AWS ML accomplishes this by letting you load the batch predictions into an S3 storage bucket, in exactly the same way you loaded the original dataset. 

You then just need to specify the S3 location of the batch predictions and then the model will produce the results. Making batch predictions does have an incremental cost.

AWS SageMaker is a fully managed platform to help you build DL models. It is one of the recently added AWS services. The main idea behind SageMaker is that ML has been difficult for developers for the following reasons:

  • The process of gathering data, processing data, building models, testing models, and deploying models creates excessive manual work for developers.
  • Due to repetitive manual work, creating ML solutions is too time consuming.
  • Creating ML solutions is too complicated because the required data and analytics skill sets go well beyond traditional software development.

SageMaker tries to address these issues. It promises to remove complexity and overcome the barriers that slow down developers. Like all AWS services, it has extensive online documentation to help you understand the service. The SageMaker developer guide is available at https://docs.aws.amazon.com/sagemaker/latest/dg

SageMaker has a lot of potential. Two particularly important features make it a powerful way to implement ML on AWS: notebook instances and flexible support for algorithms.

The SageMaker notebook instance is a compute instance running the Jupyter Notebook App. Jupyter is an open source web app (its name combines Julia, Python, and R) that allows you to create and share documents containing live code and visualizations. It is very popular in the Python and DL realms.

The other interesting feature of SageMaker is its algorithm flexibility. SageMaker supports two classes of algorithms: built-in algorithms and bring-your-own algorithms.

The list of built-in algorithms is available at https://docs.aws.amazon.com/sagemaker/latest/dg/algos.html. The algorithm list is very complete. AWS claims the preinstalled algorithms deliver 10 times the performance of other providers due to optimization. That's an impressive claim. However, AWS does not offer details on how they do this, or for which algorithms it applies.

Finally, users can bring their own algorithms or frameworks. The SageMaker examples on GitHub show how to do this for a variety of models and algorithms, including XGBoost, k-means, R, scikit, MXNet, and TensorFlow.
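
As an illustration of the built-in algorithms, here is a minimal sketch using the SageMaker Python SDK to train the built-in XGBoost container; the bucket, role ARN, instance type, and hyperparameters are assumptions and would need to be replaced with your own.

```python
# Minimal sketch (assumptions: bucket, IAM role ARN, hyperparameters) of training
# a SageMaker built-in algorithm with the SageMaker Python SDK (v2).
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # assumed role

# Look up the container image for the built-in XGBoost algorithm in this region.
container = image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1")

estimator = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-sagemaker-bucket/output/",
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

# Training data previously uploaded to S3 (CSV with the label in the first column).
estimator.fit({"train": "s3://my-sagemaker-bucket/train/"})
```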

Featured

Taking ownership of your data gives you power!

A first step in taking ownership of your data is classifying the type of data itself. Before you can understand which algorithm is best suited for your well-defined ML problem, you need to understand the nature and type of the data you possess.

There are two broad types of data, qualitative and quantitative (a small illustration follows the list):

  • Qualitative data is classified as nominal if there is no natural order between the categories (such as eye color), and ordinal if an ordering exists (such as test scores or class rankings).
  • Quantitative data is classified as discrete if the measurements are integers (such as the population of a city or country), and continuous if the measurements can take on any value, usually within some range (such as a person's height or weight).
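
A small pandas illustration of the four categories (the example values are made up):

```python
# Nominal, ordinal, discrete, and continuous data illustrated with pandas.
# The example values are invented purely for illustration.
import pandas as pd

# Nominal: categories with no natural order.
eye_color = pd.Categorical(["brown", "blue", "green"], ordered=False)

# Ordinal: categories with an inherent order.
grade = pd.Categorical(["C", "A", "B"], categories=["C", "B", "A"], ordered=True)

# Discrete: integer-valued measurements, such as population counts.
population = pd.Series([39_500_000, 8_400_000, 67_000_000], dtype="int64")

# Continuous: measurements that can take any value within a range.
height_cm = pd.Series([172.5, 180.2, 165.0], dtype="float64")

print(grade.min(), grade.max())  # ordering is only meaningful for ordinal data
```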

It's good to mention that the JSON (JavaScript Object Notation) format is an important part of ML solutions. One of JSON's advantages is that libraries exist for almost every development platform; it is truly cross-platform.

You might be asking why we need JSON for data files when we already have CSV and ARFF that are perfectly capable of representing data for ML (Machine Learning). There are two reasons you may want to consider using JSON:

1- JSON is ideal for data interchange over the network. If you need to send data to a networked device, it is a trivial task with JSON and HTTP, but it is not as simple to accomplish with CSV and ARFF. Also, if you use JSON as a data format, you need to validate your JSON after creation. There are many online tools that can perform JSON validation; many of them are open source or created with scripting languages, so you can run the validation locally if you wish (a minimal sketch follows this list).

2- Many NoSQL databases use JSON files as the object store for their data. This database architecture solves the scalability problem presented by large amounts of data.
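
As a rough sketch of both points, the snippet below converts a CSV data set to JSON records and validates the result locally with Python's standard library; the file names are assumptions.

```python
# Convert a CSV data set to JSON records and validate the result locally.
# The file names are assumptions for illustration.
import csv
import json

# Read the CSV and turn each row into a dictionary.
with open("iris.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Serialize to JSON, e.g. before sending it to a networked device over HTTP.
payload = json.dumps(rows, indent=2)
with open("iris.json", "w") as f:
    f.write(payload)

# json.loads raises an error if the document is not valid JSON, which is a
# simple local alternative to the online validation tools.
json.loads(payload)
print(f"Validated {len(rows)} records")
```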

When it comes to data preprocessing, I think there is no substitute for getting to know your data. It is a time-intensive manual exercise. Investing the time up front to analyze your data and improve its quality and integrity always pays dividends in the later phases of the ML project.

Missing values and duplicates are an important aspect of data preprocessing. Missing values can take the form of blanks, dashes, or NaN. Missing values are not hard to find; the difficulty lies in what action you should take when you find them. Missing values tend to fall into two categories:

  • MCAR (Missing Completely At Random): the value is missing purely by chance.
  • Systematically missing: the values are missing for a specific reason; the fact that a value is missing does not by itself tell you why.

When you find missing values, you have to think carefully about the resolution. There are multiple approaches you can consider when handling missing values (a short pandas sketch follows the list):

  • Take no action. Preserve the value as missing.
  • Replace the value with a “Not Tested” or “Not Applicable” indicator.
  • If a label contains a missing value, you should consider deleting the entire instance because it does not add value to a model you train.
  • Sometimes you have normalized values within a range, and assigning the minimum or maximum value can make the algorithm more efficient.
  • Impute a value for the missing value. To impute means to replace the missing value with a new value based on a study of the other attributes.
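
A short pandas sketch of these approaches (the file and column names are assumptions):

```python
# Handling missing values with pandas; the file and column names are assumptions.
import pandas as pd

# Treat blanks, dashes, and "NaN" strings as missing on load.
df = pd.read_csv("patients.csv", na_values=["", "-", "NaN"])

# 1. Take no action: simply leave the NaN values in place.

# 2. Replace missing values with an explicit indicator.
df["lab_result"] = df["lab_result"].fillna("Not Tested")

# 3. Drop instances whose label is missing; they add nothing to training.
df = df.dropna(subset=["label"])

# 4. For normalized attributes, pin missing values to the range minimum (or maximum).
df["score_normalized"] = df["score_normalized"].fillna(0.0)

# 5. Impute a value derived from the other instances, e.g. the column mean.
df["age"] = df["age"].fillna(df["age"].mean())
```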

I will introduce the Weka ML environment in another article, as it is a tool for knowledge analysis. Weka has many capabilities for preprocessing data using its Java-based tools. However, you can also use the macro processing capabilities of OpenOffice Calc to preprocess your data.

Moreover, for creating your own data, I listed private data and synthetic data as potential data sources; we generate these two classes of data ourselves. Synthetic data is data created by a computer. We all carry the greatest data collection device ever created: the smartphone.

Another key point to remember: being able to visualize your data is important. Visualization allows you to gain insights into your data easily. Data visualization is one of the best tools you can add to your toolkit on the journey to demanding more from yourself with respect to your data.
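
For example, a couple of matplotlib plots are often enough for a first look at a data set; the file and column names below are assumptions.

```python
# A first look at a data set with matplotlib; file and column names are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("iris.csv")

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Histogram: how is a single attribute distributed?
df["sepal_length"].hist(ax=ax1, bins=20)
ax1.set_title("Distribution of sepal length")

# Scatter plot: how do two attributes relate to each other?
ax2.scatter(df["sepal_length"], df["petal_length"], alpha=0.6)
ax2.set_xlabel("sepal length")
ax2.set_ylabel("petal length")

plt.tight_layout()
plt.show()
```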

Above all, you are on the path to becoming a data scientist when you follow these best practices:

  • To develop ML applications, you must adopt a data-driven methodology.
  • To develop successful ML applications, you must demand more of yourself with respect to your data.
  • Most of your code is data wrangling. The 80/20 rule applies: for any given project you undertake, 80% of your time will be spent working with the data.
  • High-quality, relevant data for a well-defined problem is the starting point.
  • Understand what type of data you have. 
  • Define your data types and consider keeping them in a data dictionary.
  • There are many sources you can use for your ML application data: public, private, government, synthetic, etc.
  • You can generate your own data. 
  • You have many tools that you can use to manipulate data, including the OpenOffice Calc spreadsheet program. You will explore additional data filtering tools available in ML environments.
  • The JSON, CSV, and ARFF formats are popular data formats for ML. Get comfortable with them all.
  • Most entities do not have enough high-quality data for DL(Deep Learning), while CML(Classic Machine Learning) applications only require a reasonable amount of data to succeed.
  • The smartphone is the best data collection device ever invented.
  • Visualization is a key aspect of ML and understanding your data.
  • To help you visualize your data, you can leverage third-party packages that make it easy to visualize data in the browser and on Android devices.

Featured

Why is the ML (machine learning) revolution happening now?

This is not the first time; there have been previous AI (Artificial Intelligence) booms and subsequent winter periods. How do we know if this time it is for real? I think three transformational megatrends are responsible for the movement.

These three megatrends have paved the way for the machine learning revolution we are now experiencing:

1) Explosion of data

2) Access to highly scalable computing resources

3) Advancement in algorithms

Explosion of Data:

There is a widely quoted statistic from IBM that states that 90% of all data on the Internet today was created since 2016. Large amounts of data certainly existed prior to 2016, so the study confirms what we already knew: people and devices today are pumping out huge amounts of data at an unprecedented rate. IBM stated that more than 2.5 exabytes (2.5 billion gigabytes) of data is generated every day.

Data observation

We can digitize practically anything today. Once digitized, the data becomes eligible for machine learning.

Highly scalable computing resources:

Cloud service providers have changed the game for practitioners of ML. They give us on-demand highly scalable access to storage and computing resources.

 These resources are useful for many ML functions, such as the following:

  • Storage: We can use cloud services as a repository for our ML data.
  • CPU resources: We can create ML models more quickly by configuring highly available distributed compute clusters with a large CPU capacity.
  • Hosting: We can provide hosted access to our data or ML models using API or other interface methods.
  • Tools: All of the cloud providers have a full suite of tools that we can use to create ML solutions.

Advancement in Algorithms:

ML algorithms have been around for quite some time. However, once the explosion in data and IaaS providers began to emerge, a renewed effort to optimize their performance began to take place.

As a conclusion, data is the single most important ingredient for a successful ML project. You need high quality data, and you need lots of it. You need a good understanding of your data before you can construct ML models that effectively process your data.

In The Signal and the Noise by Nate Silver, the author encourages us to take ownership of our data. This is really the essence of thinking like a data scientist.

Mr. Silver summed it up perfectly:

The numbers have no way of speaking for themselves. We speak for them. Data-driven predictions can succeed, and they can fail. It is when we deny our role in the process that the odds of failure rise. Before we demand more of our data, we need to demand more of ourselves.

Featured

To think differently, you need a new data-driven methodology !

Deep learning (DL) is a sort of Wonderland. It is responsible for all of the hype we have in the field today. However, it has achieved that hype for a very good reason. You will often hear it stated that DL operates at scale. What does this mean exactly?

It is a performance argument, and performance is obviously very important.

CML (classic machine learning) slightly outperforms DL for smaller data set sizes. The question is, how small is small? When we design ML (machine learning) apps, we need to consider on which side of the inflection point the data set size resides. There is no easy answer; if there were, we would place actual numbers on the x-axis scale. It depends on your specific situation, and you will need to decide which approach to use when you design the solution.

Deep learning has demonstrated superior results versus CML in many specific areas, including speech, natural language processing, computer vision, playing games, self-driving cars, pattern recognition, sound synthesis, art creation, photo classification, irregularity (fraud) detection, recommendation engines, behavior analysis, and translation, just to name a few.

As you gain experience with ML, you begin to develop a feel for when a project is a good candidate for DL.

Perhaps the biggest challenge of producing ML applications is training yourself to think differently about the design and architecture of the project. You need a new data-driven methodology.

The Figure below introduces the ML-Gates Methodology:

The methodology uses these six gates to help organize CML and DL development projects. Each project begins with ML-Gate 6 and proceeds to completion at ML-Gate 0; the gates proceed in decreasing order.

Think of them as leading to the eventual launch or deployment of the ML project.

As developers, we write a lot of code. When we take on new projects, we typically just start coding until we reach the deliverable product. With this approach, we typically end up with heavily coded apps.

With ML, we want to flip that methodology on its head. We instead are trying to achieve data-heavy apps with minimal code. Minimally coded apps are much easier to support.

As a conclusion, ML-Gate 1 (MLG-1) is where you realize the coding time savings: it usually takes only a few lines of code to open a prebuilt model and make a new prediction. This phase of the methodology also includes system testing of the solution.
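
As a rough illustration of how little deployment code MLG-1 can require, here is a sketch using scikit-learn and joblib; the model file and feature values are assumptions.

```python
# Loading a prebuilt model and making a prediction in a few lines.
# scikit-learn/joblib are used as an example; the file and features are assumptions.
import joblib

model = joblib.load("model.joblib")             # model produced in an earlier gate
prediction = model.predict([[5.1, 3.5, 1.4, 0.2]])
print(prediction)
```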

Featured

Machine learning will be a must for every business

I believe that machine learning (ML) is a generic term that includes the subfields of deep learning (DL) and classic machine learning (CML). ML itself is seen as a subset of artificial intelligence, which in turn is anything that pretends to be smart.

The table below gives the definition and domain of each field:

Deep learning (DL), for example, is a class of machine learning algorithms that utilize neural networks.

Some technologies become widespread and commonly used, while others simply fade away. Recall that just a few short years ago, 3D movies were expected to totally overtake traditional films for cinematic release. It did not happen.

It is important for us to continue to monitor the ML and DL technologies closely.

It remains to be seen how things will play out, but ultimately, we can convince ourselves about the viability of these technologies by experimenting with them, building, and deploying our own applications.

Challenges and Concerns

As with any IT initiative, there is an opportunity cost associated with implementing it, and the benefit derived from the initiative must outweigh the opportunity cost, that is, the cost of forgoing another potential opportunity by proceeding with AI/ML.

These strategies, summarized below, are available even to small organizations and individual freelance developers.

Data Science Platforms

If you ask business leaders about their top ML objectives, you will hear variations of the following:

• Improve organizational efficiency

• Make predictive insights into future scenarios or outcomes

• Gain a competitive advantage by using AI/ML

• Monetize AI/ML

Suppose you wish to create a recommendation engine for visitors to your website. You would like to use machine learning to build and train a model using historical product description data and customer purchase activity on your website, and then use the model to make real-time recommendations for your site visitors. This is a common ML use case (a minimal sketch appears after the monetization list below).

ML Monetization

One of the best reasons to add ML to your projects is the increased potential to monetize.

You can monetize ML in two ways: directly and indirectly.

  • Indirect monetization: Making ML a part of your product or service.
  • Direct monetization: Selling ML capabilities to customers who in turn apply them to solve particular problems or create their own products or services.
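
To make the recommendation-engine use case above concrete, here is a toy sketch that scores catalog products against a visitor's purchase history using TF-IDF over product descriptions; the catalog and history are invented for illustration.

```python
# Toy content-based recommender: rank products by similarity to what the
# visitor already bought. The catalog and purchase history are invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = {
    "p1": "wireless bluetooth headphones noise cancelling",
    "p2": "stainless steel water bottle insulated",
    "p3": "usb-c charging cable fast charge",
    "p4": "over-ear studio headphones wired",
}
purchased = ["p1"]  # the visitor's purchase history

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(catalog.values()))
ids = list(catalog.keys())

# Average the vectors of purchased products to build a simple visitor profile.
profile = np.asarray(matrix[[ids.index(p) for p in purchased]].mean(axis=0))
scores = cosine_similarity(profile, matrix).ravel()

# Recommend the most similar products the visitor has not bought yet.
ranked = sorted(zip(ids, scores), key=lambda pair: -pair[1])
print([pid for pid, _ in ranked if pid not in purchased][:2])
```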

The table below shows some recent statistics:

These CAGRs represent impressive growth. Some of the growth is attributed to DL. However, you should not discount the possible opportunities available to you with CML, especially for mobile devices.

As a conclusion, the data show that ML for mobile apps has approximately triple the funding of the next closest area, NLP. The categories included show that many of the common DL fields, such as computer vision, NLP, speech, and video recognition, have been included as a specific category. This allows us to assume that a significant portion of the ML apps category is classic machine learning.