A Map to Perfection: Using D3.js to Make Beautiful Web Maps

by Tomislav Bacinger – Software Engineer @ Toptal

Data Driven Documents, or D3.js, is “a JavaScript library for manipulating documents based on data”. Or to put it more simply, D3.js is a data visualization library. It was developed by Mike Bostock with the idea of bridging the gap between static display of data, and interactive and animated data visualizations.

D3 is a powerful library with a ton of uses. In this tutorial, I’ll discuss one particularly compelling application of D3: map making. We’ll go through the common challenges of building a useful and informative web map, and show how in each case, D3.js gives capable JavaScript developers everything they need to make maps look and feel beautiful.

What is D3.js used for?

D3.js can bind any arbitrary data to a Document Object Model (DOM), and then, through the use of JavaScript, CSS, HTML and SVG, apply transformations to the document that are driven by that data. The result can be simple HTML output, or interactive SVG charts with dynamic behavior like animations, transitions, and interaction. All the data transformations and renderings are done client-side, in the browser.

At its simplest, D3.js can be used to manipulate a DOM. Here is a simple example where D3.js is used to add a paragraph element to an empty document body, with “Hello World” text:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>D3 Hello World</title>
    <script src="http://d3js.org/d3.v3.min.js"></script>
  </head>
  <body>
    <script type="text/javascript">
      d3.select("body").append("p").text("Hello World");
    </script>
  </body>
</html>

The strength of D3.js, however, is in its data visualization ability. For example, it can be used to create charts. It can be used to create animated charts. It can even be used to integrate and animate different connected charts.
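
For instance, here is a minimal sketch of a data-driven bar chart built from nothing more than styled div elements; the data values and the pixel scaling factor are arbitrary, chosen only for illustration:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>D3 Bar Chart Sketch</title>
    <script src="http://d3js.org/d3.v3.min.js"></script>
  </head>
  <body>
    <script type="text/javascript">
      // Bind an arbitrary array of numbers to div elements and let
      // each value drive the width and label of its bar.
      var data = [4, 8, 15, 16, 23, 42];

      d3.select("body").selectAll("div")
          .data(data)
        .enter().append("div")
          .style("background", "steelblue")
          .style("color", "white")
          .style("margin", "2px")
          .style("width", function(d) { return d * 10 + "px"; })
          .text(function(d) { return d; });
    </script>
  </body>
</html>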

D3 for Web Maps and Geographic Data Visualization

But D3.js can be used for much more than just DOM manipulation or drawing charts. D3.js is extremely powerful when it comes to handling geographical information. Manipulating and presenting geographic data can be very tricky, but building a map with D3.js is quite simple.

Here is a D3.js example that will draw a world map based on the data stored in a JSON-compatible data format. You just need to define the size of the map and the geographic projection to use (more about that later), define an SVG element, append it to the DOM, and load the map data using JSON. Map styling is done via CSS.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>D3 World Map</title>
    <style>
      path {
        stroke: white;
        stroke-width: 0.5px;
        fill: black;
      }
    </style>
    <script src="http://d3js.org/d3.v3.min.js"></script>
    <script src="http://d3js.org/topojson.v0.min.js"></script>
  </head>
  <body>
    <script type="text/javascript">
      // Map dimensions, in pixels.
      var width = 900;
      var height = 600;

      // The geographic projection to use (more on projections later).
      var projection = d3.geo.mercator();

      // The SVG element that will hold the map.
      var svg = d3.select("body").append("svg")
          .attr("width", width)
          .attr("height", height);

      // A path generator that turns GeoJSON geometry into SVG path data.
      var path = d3.geo.path()
          .projection(projection);

      var g = svg.append("g");

      // Load the TopoJSON world data and draw one path per country.
      d3.json("world-110m2.json", function(error, topology) {
          g.selectAll("path")
            .data(topojson.object(topology, topology.objects.countries)
                .geometries)
          .enter()
            .append("path")
            .attr("d", path);
      });
    </script>
  </body>
</html>

Geographic Data for D3

For this D3.js tutorial, keep in mind that map building works best with data formatted as JSON, particularly the GeoJSON and TopoJSON specifications.

GeoJSON is “a format for encoding a variety of geographic data structures”. It is designed to represent discrete geometry objects grouped into feature collections of name/value pairs.
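
To make the structure concrete, here is a minimal GeoJSON sketch, written as a JavaScript object literal, holding a single made-up country; the coordinates and the property name are purely illustrative:

// A hypothetical feature collection with one polygon feature.
var exampleCollection = {
  "type": "FeatureCollection",
  "features": [{
    "type": "Feature",
    "properties": { "name": "Exampleland" },  // made-up name
    "geometry": {
      "type": "Polygon",
      "coordinates": [
        [[0.0, 0.0], [10.0, 0.0], [5.0, 8.0], [0.0, 0.0]]  // closed lon/lat ring
      ]
    }
  }]
};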

TopoJSON is an extension of GeoJSON that can encode topology, where geometries are “stitched together from shared line segments called arcs”. TopoJSON eliminates redundancy by storing relational information between geographic features, not merely spatial information. As a result, geometry is much more compact, and shared borders are stored only once. A typical TopoJSON file is roughly 80% smaller than its GeoJSON equivalent.

So, for example, given a map with several countries bordering each other, the shared parts of the borders will be stored twice in GeoJSON, once for each country on either side of the border. In TopoJSON, it will be just one line.

Map Libraries: Google Maps and Leaflet.js

Today, the most popular mapping libraries are Google Maps and Leaflet. They are designed to get “slippy maps” on the web quickly and easily. “Slippy map” is a term referring to modern JavaScript-powered web maps that allow zooming and panning around the map.

Leaflet is a great alternative to Google Maps. It is an open source JavaScript library designed to make mobile-friendly interactive maps, with simplicity, performance and usability in mind. Leaflet is at its best when leveraging the large selection of raster-based maps available around the internet, bringing simplicity to working with tiled maps and to presenting them.

Leaflet can be used with great success when combined with D3.js, with D3.js handling data manipulation and vector-based graphics. Combining them brings out the best in both libraries, as the sketch below shows.
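
As a rough sketch of this combination (assuming a Leaflet map instance named map already exists on the page and that an array of GeoJSON features has been loaded into a variable named features; both names are placeholders), D3 can render vectors into Leaflet's overlay pane and let Leaflet handle the projection:

// Sketch only: "map" is an existing L.map instance and "features"
// is an already-loaded array of GeoJSON features (both assumed here).
var svg = d3.select(map.getPanes().overlayPane).append("svg"),
    g = svg.append("g").attr("class", "leaflet-zoom-hide");

// Use Leaflet to convert longitude/latitude to layer pixels,
// so the D3 paths stay aligned with the underlying tiles.
function projectPoint(x, y) {
  var point = map.latLngToLayerPoint(new L.LatLng(y, x));
  this.stream.point(point.x, point.y);
}

var transform = d3.geo.transform({ point: projectPoint }),
    path = d3.geo.path().projection(transform);

g.selectAll("path")
    .data(features)
  .enter().append("path")
    .attr("d", path);

A complete integration would also size the overlay SVG and redraw the paths whenever Leaflet's view changes, as Mike Bostock's “D3 + Leaflet” example demonstrates.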

Google Maps is more difficult to combine with D3.js, since Google Maps is not open source. It is possible to use Google Maps and D3 together, but this is mostly limited to overlaying data with D3.js over Google Maps background maps. Deeper integration is not really possible without hacking.

Projections – Beyond Spherical Mercator

The question of how to project maps of the 3-dimensional spherical Earth onto 2-dimensional surfaces is an old and complex problem. Choosing the best projection for a map is an important decision to make for every web map.

In our simple world map D3.js tutorial above, we used the Spherical Mercator projection coordinate system by calling d3.geo.mercator(). This projection is also known as Web Mercator. It was popularized by Google when they introduced Google Maps. Later, other web services adopted the projection too, namely OpenStreetMap, Bing Maps, Here Maps and MapQuest. This has made Spherical Mercator a very popular projection for online slippy maps.

All mapping libraries support the Spherical Mercator projection out of the box. If you want to use other projections, you will need to use, for example, the Proj4js library, which can do any transformation from one coordinate system to another. In the case of Leaflet, there is a Proj4Leaflet plugin. In the case of Google Maps, there is, well, nothing.

D3.js brings cartographic projections to a whole new level with built-in support for many different geographic projections. D3.js models geographic projections as full geometric transformations, which means that when straight lines are projected to curves, D3.js applies configurable adaptive resampling to subdivide lines and eliminate projection artifacts. The Extended Geographic Projections D3 plugin brings the number of supported projections to over 40. It is even possible to create a whole new custom projection using d3.geo.projection and d3.geo.projectionMutator.
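
Switching projections in the world map example above is then a one-line change. For instance, here is a sketch of an orthographic (globe-like) view, with arbitrary scale, rotation and clip-angle values chosen purely for illustration:

// Replace the Mercator projection from the earlier example with an
// orthographic one; the rest of the map code stays the same.
var projection = d3.geo.orthographic()
    .scale(250)
    .translate([width / 2, height / 2])
    .rotate([0, -30])   // arbitrary rotation, for illustration
    .clipAngle(90);     // hide the far side of the globe

var path = d3.geo.path()
    .projection(projection);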

Raster Maps

As mentioned before, one of the main strengths of D3.js is in working with vector data. To use raster data, one option is to combine D3.js with Leaflet. But it is also possible to do everything with just D3.js, using the d3.geo.tile plugin to create slippy maps. Even with D3.js alone, people are doing amazing things with raster maps.

Vector Manipulation on the Fly

One of the biggest challenges in classic cartography is map generalization. You want to have as much detailed geometry as you can, but that data needs to adapt to the scale of the displayed map. Having too high a data resolution increases download time and slows down rendering, while too low a resolution ruins details and topological relations. Slippy maps using vector data can run into a big problem with map generalization.

One option is to do map generalization beforehand: to have different datasets in different resolutions, and then display the appropriate dataset for the current selected scale. But this multiplies datasets, complicates data maintenance, and is prone to errors. Yet most mapping libraries are limited to this option.

The better solution is to do map generalization on the fly, and here D3.js comes to the rescue again with its powerful data manipulation features: D3.js enables line simplification to be done right in the browser.
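
As one illustration of doing that work client-side (this is plain JavaScript rather than a D3 API, and the tolerance value is an arbitrary placeholder), a Douglas–Peucker pass could thin out a coordinate array before it is handed to d3.geo.path:

// Simplify an array of [x, y] points, keeping only those that deviate
// from the surrounding line by more than the given tolerance.
function simplify(points, tolerance) {
  if (points.length < 3) return points;
  var first = points[0],
      last = points[points.length - 1],
      index = -1,
      maxDist = 0;
  for (var i = 1; i < points.length - 1; i++) {
    var d = perpendicularDistance(points[i], first, last);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist > tolerance) {
    var left = simplify(points.slice(0, index + 1), tolerance);
    var right = simplify(points.slice(index), tolerance);
    return left.slice(0, -1).concat(right);  // drop the duplicated split point
  }
  return [first, last];
}

// Distance from point p to the line through points a and b.
function perpendicularDistance(p, a, b) {
  var dx = b[0] - a[0],
      dy = b[1] - a[1],
      len = Math.sqrt(dx * dx + dy * dy);
  if (len === 0) return Math.sqrt(Math.pow(p[0] - a[0], 2) + Math.pow(p[1] - a[1], 2));
  return Math.abs(dy * p[0] - dx * p[1] + b[0] * a[1] - b[1] * a[0]) / len;
}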

I want more!


D3.js is not easy to master, and it has a steep learning curve. It is necessary to be familiar with a number of technologies: JavaScript objects, the jQuery-style chaining syntax, SVG and CSS, and of course D3's own API. On top of that, one needs a bit of design skill to create nice graphics in the end. Luckily, D3.js has a big community, and there are a lot of resources for people to dig into. A great starting point for learning D3 is these tutorials.

If you like learning by examining examples, Mike Bostock has shared more than 600 D3.js examples on his webpage. Each example has a Git repository for version control, and can be forked, cloned and commented on.

If you are using CartoDB, you’ll be glad to hear that CartoDB makes D3 maps a breeze.

And for a little bonus at the end, here’s one of my favorite examples showing off the amazing things D3 is capable of:

earth, a global animated 3D wind map of the entire world made with D3.js. Earth is a visualization of global weather conditions, based on weather forecasts made by supercomputers at the National Centers for Environmental Prediction, NOAA / National Weather Service, and converted to JSON. You can customize the displayed data, such as the height at which wind velocity is measured, change the overlaid data, and even change the Earth projection.

In-Memory Computing – Best Practice for Business Intelligence and Data Management

In-memory computing is making its way out of R&D labs and into the enterprise, enabling real-time processing and intelligence…

Gartner’s managing vice-president for business intelligence and data management, Ian Bertram, says the key application for in-memory technology today remains business intelligence, where it enables analysis to be conducted on the fly, with accompanying faster refreshing.

Now let us look at the impact of in-memory computing on an organization.

The massive explosion in data volumes collected by many organizations has brought with it an accompanying headache in terms of putting it to gainful use.

Businesses increasingly need to make quick decisions, and pressure is mounting on IT departments to provide solutions that deliver quality data much faster than has been possible before. The days of trapping information in a data warehouse for retrospective analysis are fading in favour of event-driven systems that can provide data and enable decisions in real time.

Indeed, real-time computing is a new catch cry across the technology industry. In-memory computing works by bringing data physically closer to the central processing unit.

Chip manufacturers have been on this path for some time with the integration of Level 1 and Level 2 caching into microprocessors, as moving indices into Level 1 cache makes them more quickly accessible. Moving out through the caching levels usually results in a loss of speed but an increase in the size of data storage.

In-memory technology follows the same principles and moves data off disks and into main memory, eliminating the need to run a disk-seek operation each time a data look-up is performed, significantly boosting performance.

The idea of running databases in memory is nothing new; it was one of the foundations of the business intelligence product QlikView, released by QlikTech back in 1997. More recently, other technology companies have jumped on the bandwagon, notably SAP and TIBCO. What is making in-memory so popular now is that plunging memory prices have made it economical for a wider range of applications.

Hence myOpenSourceStore – My Window to Free Professional Software – provides the best open source in-memory computing tools.

The open source in-memory computing tools include:

GORA

Apache Gora is an open source framework that provides an in-memory data model and persistence for big data. Gora supports persisting to column stores, key-value stores, document stores and RDBMSs, and analyzing the data with extensive Apache Hadoop MapReduce support.

Download GORA

GRIDGAIN

GridGain’s open source software provides immediate, unhindered freedom to develop with the most mature, complete and tested in-memory computing platform on the market, enabling computation and transactions orders of magnitude faster than traditional technologies allow. From high-performance computing, streaming and data grids to an industry-first in-memory Hadoop accelerator, GridGain provides a complete end-to-end stack for low-latency, high-performance computing for every category of payload and data processing requirement.

Download GRIDGAIN

Hazelcast

Hazelcast is an in-memory open source data grid based on Java. When multiple nodes form a cluster, data is evenly distributed among the nodes. This allows for horizontal scalability, both in terms of available storage space and processing power.

Typical use-cases for Hazelcast:

  • Cache frequently accessed data in-memory, often in front of a database
  • Store temporal data like web sessions
  • In-memory data processing/analytics
  • Memcached alternative with protocol compatible interface
  • Cross-JVM communication/shared storage

Download HAZELCAST

NMemory

NMemory is an open source in-memory database. It can be hosted by .NET applications and supports traditional database features like indexes, foreign key relations, transaction handling and isolation, stored procedures, query optimization and field constraints.

Currently it just serves as the core component of the Effort library. However, developer interest and contribution could make it a more robust engine that might serve well in a wide range of scenarios.

Download NMEMORY

WHAT PURPOSE DO THESE DATA MINING TOOLS FULFILL FOR AN ORGANISATION?

Most internal auditors, especially those working in customer-focused industries, are aware of data mining and what it can do for an organization: reduce the cost of acquiring new customers and improve the sales rate of new products and services. However, whether you are a beginner internal auditor or a seasoned veteran looking for a refresher, gaining a clear understanding of what data mining does and the different data mining tools and techniques available can improve audit activities and business operations across the board.

WHAT IS DATA MINING?

Data mining is the computational process of discovering patterns in large data sets, involving methods from artificial intelligence, machine learning and database systems. The main objective of data mining is to extract information from a dataset and transform it into an understandable structure for further use. The widely used term KDD (Knowledge Discovery in Databases) is also considered to mean data mining.

DATA MINING TOOLS
Different types of data mining tools are available at myOpenSourceStore, each with its own strengths and weaknesses. Internal auditors need to be aware of the different kinds of data mining tools available and recommend the use of a tool that matches the organization's current detection needs. This should be considered as early as possible in the project's lifecycle, perhaps even in the feasibility study.

Most data mining tools can be classified into one of three categories:

Traditional data mining tools

Traditional data mining programs help companies establish data patterns and trends by using a number of complex algorithms and techniques. Some of these tools are installed on the desktop to monitor the data and highlight trends, while others capture information residing outside a database. The majority are available in both Windows and UNIX versions, although some specialize in one operating system only.

Dashboards

Dashboards are installed on computers to monitor information in a database; they reflect data changes and updates onscreen, often in the form of a chart or table, enabling the user to see how the business is performing. Historical data can also be referenced, enabling the user to see where things have changed (e.g., an increase or decrease in sales compared to the same period last year). This functionality makes dashboards easy to use and particularly appealing to managers who wish to have an overview of the company's performance.

Text-mining tools

The third type of data mining tool is sometimes called a text-mining tool because of its ability to mine data from different kinds of text, from Microsoft Word and Acrobat PDF documents to simple text files. These tools scan content and convert the selected data into a format that is compatible with the tool's database, thus providing users with an easy and convenient way of accessing data without the need to open different applications.

As said before, myOpenSourceStore – My Window to Free Professional Software – provides the best five open source data mining tools.

The Open Source Data Mining Tools include

RapidMiner

RapidMiner is an open source data mining tool used for machine learning, data mining, text mining, predictive analytics and business analytics. RapidMiner can easily integrate your own specialized algorithms by leveraging its powerful and open extension APIs. RapidMiner Studio breaks away from the limitations of traditional data analysis tools and allows you to work with large data sources, with in-memory, in-database and in-Hadoop analytics for every size of data source.

Download RAPIDMINER

KNIME

KNIME, also called the Konstanz Information Miner, is an open source data mining tool used as a data analytics, reporting and integration platform. The modular data pipelining concept of KNIME integrates various components for machine learning and data mining. It can also be used in other areas such as CRM customer data analysis, business intelligence and financial data analysis.

Download KNIME

Apache Mahout

Apache Mahout is an open source data mining tool designed to produce free implementations of distributed or otherwise scalable machine learning algorithms, focused primarily on the areas of collaborative filtering, clustering and classification. Mahout also provides Java libraries for common mathematics operations and primitive Java collections.

Download APACHE MAHOUT

Keel

Keel is an open source data mining tool used to assess evolutionary algorithms for data mining problems, including regression, classification, clustering and pattern mining. It also allows us to perform a complete analysis of any learning model in comparison to existing ones, including a statistical test module for comparison.

Download KEEL

RATTLE

Togaware developed an open source data mining tool called Rattle that presents statistical and visual summaries of data, transforms data into forms that can be readily modeled, builds both unsupervised and supervised models from the data, presents the performance of models graphically, and scores new datasets.

Download RATTLE

When evaluating data mining strategies, one may decide to acquire several tools for specific purposes rather than purchasing one tool that meets all needs. Hope these tools meet your requirements…

Need for a Big Data Tool – How to Select One?

“With all of the Big Data tools, which is the right one for me?” This is a very complicated question, and we can't give a single answer to it. To answer this question, one may need to answer many more questions. The most important among them is:

Why do you need Big Data tools?

Big Data is a very complex subset of technology that can be difficult to implement, with best practices still being defined. Many problems being solved with Big Data can be solved with existing tools; they may just require a better implementation. That said, even though a problem can be solved with an existing tool, the cost of solving it may make a Big Data solution a better option.

Once you look at your current system and determine that it is not affordable or reasonable to continue to grow the business with the existing tools, look at what you are trying to do with your data. Look at your data science and business intelligence goals, but also at the often-overlooked data engineering goals. Consider what sort of models or reports you want to be able to build on your data, how the data will be loaded and accessed, and how fast response times need to be. Create a detailed requirements inventory to drive the process and ensure that the architected solution will meet those requirements. Also look at what application development and data skills currently exist within your business, and whether training current resources or bringing in new resources is a possibility.

The main aspects one has to consider while selecting any Big Data tool are:

  • Batch processing
  • Aggregation and in-database processing
  • Massively Parallel Processing (MPP) and Analytic databases

There are Big Data tools designed for batch processing of large amounts of data, tools designed for real-time ingestion and access of data but not processing, and tools designed for speed-of-thought aggregation of data but not for fast loading. You need to determine what your business needs require.

Understanding all of the Big Data tools and their benefits and challenges takes months or even years. Picking the right tools is a challenging process that requires extensive research of your business needs and the vendors and tools available.

Hence based on these difficulties of selecting a Big Data Tool, myOpenSourceStore – My Window to Free Professional Software provides you the best Open Source Big Data Tools.

The Best Open Source Big Data Tools Include

Apache Spark

Apache Spark is an open source system for fast and flexible large-scale data analysis. Its uses include interactive exploration of very large datasets, near real-time stream processing, and ad-hoc SQL analytics. It is an extremely fast cluster computing system that can process data in memory. The main advantage of Apache Spark is that it can run up to 100 times faster than Hadoop MapReduce.

Download Apache Spark

Apache Drill

Apache Drill is an open source software framework that supports data-intensive distributed applications for interactive analysis of large-scale datasets. The main feature of Drill is its ability to scale to 10,000 servers or more and to process petabytes of data and trillions of records in seconds.

Download Apache Drill

D3.js – An Open Source Big Data Tool

D3.js is an open source JavaScript library that allows you to manipulate documents that display Big Data. D3 stands for Data-Driven Documents. D3 has been designed to be extremely fast, to support large datasets, and to work across hardware platforms. D3.js is used to create dynamic graphics using web standards like HTML5, SVG and CSS.

Download D3.js

HCatalog

HCatalog is an open source metadata and table management framework that works with Hadoop HDFS data. HCatalog is used to liberate Big Data by allowing different tools to share data: Hadoop users working with a tool like Pig, MapReduce or Hive have immediate access to data created with another tool, without any loading or transfer steps.

Download HCatalog

Apache Storm

Apache Storm is an open source distributed real-time computation system. Storm makes it easy to reliably process unbounded streams of data, doing for real-time processing what Hadoop did for batch processing. Storm is simple and can be used with any programming language.

Download Apache Storm

But there is no single answer to “With all of the Big Data tools, which is the right one for me?” Developing an answer is an extensive and challenging project. We hope you develop the best…

Testing Tools – Essential for Developers

According to the Standish Group's research report on project failure and success, “nearly three out of four software projects in the US are either delivered late, over budget or are cancelled before being completed”. Project success rates stand at just 34% of all projects, while outright failures have declined to 15%. Challenged projects account for the remaining 51%.

Why is testing necessary?

Testing is necessary because we all make mistakes. Some of those mistakes are unimportant, but some of them are expensive or dangerous. We need to check everything and anything we produce because things can always go wrong.

Despite the involvement of experienced managers, developers and testers in projects, this is a problem that continues to this day.

These reports conclude that the efficiency of the project matters. Accordingly, testing the project before delivery takes on a prominent role: testing has a direct impact on the on-time delivery of the project.

What is testing?

Testing is the process of validating and verifying a software program, application or product so that it:

  • Meets the business and technical requirements that guided its design and development
  • Works as expected
  • Can be implemented with the same characteristics

Requirement of testing tools

To address the issues discussed above, various testing tools are needed, depending on the requirements and the objectives of the project.

Important Factors that influence Testing Tool Selection

  • Evaluation of tools against clear requirements and objective criteria
  • Proof-of-concept to see whether the product works as desired and meets the requirements and objectives defined for it
  • Evaluation of the vendor (training, support and other commercial aspects) or Open source support services for the tools selected
  • Identifying and planning internal implementation

With this in mind, myOpenSourceStore has put the best open source software testing tools one click away for developers.

Download the best Open Source Software Testing Tools

The best open source software testing frameworks that myOpenSourceStore includes are:

Apache JMeter

Apache JMeter is an open source tool used for load testing, analysing the performance of various web applications. It can also be used as a unit testing tool.

OpenSTA

OpenSTA is open source load testing software that performs scripted HTTP and HTTPS heavy load tests with performance measurements. OpenSTA currently runs only on Microsoft Windows based operating systems.

Selenium

Selenium is an open source framework mainly used for testing web applications. It also provides a test domain-specific language for writing tests in a number of popular programming languages.

Pylot

Pylot is an open source performance and scalability testing tool that runs HTTP load tests, generates concurrent load (HTTP requests), verifies server responses, and produces reports with metrics.

myOpenSourceStore hopes the above tools will help you in achieving the following goals and objectives

  • Finding defects
  • Gaining confidence in and providing information about the level of quality.
  • Preventing defects

For more information on open source tools and support services, please follow us on:

Facebook – http://www.facebook.com/MyOpenSourceStore

Twitter – https://twitter.com/opensourcestore

What has this Heartbleed bug done to OpenSSL…

For a week or so, we have been hearing a lot about the Heartbleed bug and OpenSSL. Now let us look at what the Heartbleed bug is and what damage it has been doing to OpenSSL.

What is OpenSSL?

OpenSSL is an open source implementation of the SSL and TLS protocols. SSL is the most common technology used to secure websites. Web servers that use it securely send an encryption key to the visitor, which is then used to protect all other information coming to and from the server.

It is crucial in protecting services like online shopping or banking from eavesdropping, as it renders users immune to so-called man-in-the-middle attacks, where a third party intercepts both streams of traffic and uses them to discover confidential information.

OpenSSL released its fixed version, 1.0.1g, 20 days ago, which addresses the Heartbleed bug. At its disclosure, nearly half a million of the Internet's secure web servers certified by trusted authorities were believed to have been vulnerable to the attack.

What is HeartBleed Bug?

The Heartbleed bug – so called because it exploits a failure in an extension called Heartbeat – is a serious issue in the popular OpenSSL cryptographic software library. The bug was named by an engineer at Codenomicon, a Finnish cyber security company, which also created the bleeding-heart logo and launched the domain Heartbleed.com to explain the bug to the public. The bug took two years to come to light: it was introduced two years ago and has only now been noticed.

The bug leaks information that is supposed to be protected by SSL, which provides communication security and privacy over the Internet for applications such as the web, email, instant messaging (IM) and some virtual private networks (VPNs).

The Heartbleed bug allows anyone on the Internet to read the memory of systems protected by vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, as well as the names and passwords of users and the actual content. It allows attackers to eavesdrop on communications and steal data directly from services and users. This has caused many organizations and Internet users to lose the privacy of their data.

Threats faced by OpenSSL

On April 7, 2014, a fixed version of OpenSSL was released at the same time as Heartbleed was publicly disclosed. At that time, some 17 percent (around half a million) of the Internet's secure web servers certified by trusted authorities were believed to be vulnerable to the attack, allowing theft of the servers' private keys and users' session cookies and passwords.

Since the disclosure of the Heartbleed bug, it has emerged that many sites like Facebook, Instagram, Yahoo, GoDaddy and Flickr, and even the major search engine Google along with its other services like Gmail, had been exposing our sensitive information to hackers for over two years because of this bug.

Immediate actions to take to limit the impact of the Heartbleed bug…

Reset all the passwords

Everyone should reset all passwords because there is no way to know if any passwords have been compromised. With an issue that affects virtually every site and service on the Internet, it’s fair to assume our passwords are potentially compromised.

However, there is little point in rushing to do it before the sites have patched and updated; otherwise your new passwords will also be exposed to the same issue.

‘Hoping for a speedy recovery, OpenSSL.’

Apache OpenOffice – Spread Across the World

The Apache Software Foundation has announced that its open source productivity suite OpenOffice has reached 100 million downloads in around two years.

The productivity suite is composed of six applications and is available in over 120 languages on Windows, Mac and Linux. OpenOffice can be downloaded from myOpenSourceStore.com, where users will find a store with various open source software packages and their support services.

Why has OpenOffice spread so widely?

Apache OpenOffice includes a word processor, a spreadsheet, a presentation program, a diagram editor, a database manager and an equation editor. Of those 100 million downloads, the largest share came from Windows users, followed by users of OS X and Linux, so the OpenOffice suite is easily accessible to Windows users.

The changes introduced in version 4.0, such as handling Microsoft Active Accessibility and IAccessible2, made the office suite more compatible with popular screen readers for the blind and visually impaired. These new options made Apache OpenOffice an attractive choice for many public institutions, such as the administrative region of Emilia-Romagna, Italy, which recently announced a migration to OpenOffice. The latest release also offers Microsoft Office interoperability, enhancements to drawing/graphics, and performance improvements, among many other features.

As a result of these changes, OpenOffice has secured a remarkable position in the field of open source.

The most recent version of the suite is 4.0.1. Apache OpenOffice includes:

OpenOffice Writer – a word processor

OpenOffice Calc – a spreadsheet

Impress – a presentation program

Draw – a diagram editor

Base – a database management system

Math – an equation editor

For updates on the other applications of Apache OpenOffice, please follow us on

Facebook – http://www.facebook.com/MyOpenSourceStore

Twitter – https://twitter.com/opensourcestore

Bugzilla 4.4.2 Released – Open Source Bug Tracking Software

The Mozilla Foundation, based in California, has released the latest version of its open source bug tracking software, Bugzilla 4.4.2, based on the latest developments in the open source Mozilla project. It is written in Perl and released under the Mozilla Public License.

Bugzilla, the open source bug tracking software, was first developed by Netscape Communications in 1998. It has been widely used for bug tracking by various organizations, including the Mozilla Foundation, Apache, Red Hat, Novell, Yahoo and Wikimedia. Bugzilla has now released its latest version, 4.4.2, which can be downloaded from myOpenSourceStore.

Download BUGZILLA – Open Source Bug Tracking Software

System Requirements for Bugzilla 4.4.2:

  • A compatible database management system
  • Perl 5
  • A compatible web server
  • A suitable mail transfer agent

Bugzilla is usually installed on Linux platform using Apache HTTP Server but any web server that supports CGI can be used.

Why is Bugzilla 4.4.2 required?

  • Reporting Bugs
  • Edit Bugs
  • Resolving Bugs
  • Verifying Bugs
  • Changing Bug information fields
  • Reassigning Bugs
  • Bug Flags

What is the difference between Bugzilla 4.4.2 and 4.4.1?

Apache Configuration Change is the main difference between them.

For improved security, Bugzilla now prevents directory browsing by default. In order to do that, the root bugzilla/.htaccess file now contains the Options -Indexes directive. By default, this directive is not allowed in .htaccess and so you must configure Apache to allow it.

Enhancements from previous versions:

The Mozilla Foundation explains that Web Services are the main target of this latest release. Other enhancements include improved support for Oracle, performance improvements, real MIME type auto-detection for attachments, and a lot of other improvements.

Benefits compared to previous versions:

  • It is now possible to add yourself to the CC list when uploading an attachment and when editing an existing one.
  • When viewing a bug, the list of duplicated bugs is now listed near the top of the page.
  • The changed by operator in Boolean charts now accepts pronouns.
  • The requester and requestee fields in Boolean charts now accept pronouns.
  • The size of graphical reports is now set dynamically to fit within the window of the web browser. The Taller/Thinner/Fatter/Shorter links are now gone.
  • And more

Updates on upcoming versions:

For regular updates on Bugzilla 5.0 and other upcoming versions, follow us on Facebook and Twitter.
