What exactly is an Android app?

Android applications are made up of a number of different components. There are four types of components, and an application can contain one or more instances of each. A running instance of a component corresponds to a part of an application that can execute independently of the others, so in many ways an Android application can be viewed as a collection of interconnected components. The four component types are described below.

Activities. An activity component implements interactions with the user. Multiple activities work together to produce a complete user interaction, and each activity is normally designed to manage a single kind of user action.

A mapping application, for example, might have two activities: one that shows the user a list of locations to map and another that shows a map graphic with the chosen location. An activity includes a default window for drawing visual elements. To draw or collect user input, an activity employs one or more view objects organized hierarchically. Views are the user-interface widgets, such as checkboxes, images, and lists, that are common to all GUI-based development environments, and the Android SDK includes a number of views for developers to use.
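As a minimal sketch of this structure (all class, layout, and widget identifiers here are hypothetical), an activity that presents such a list of locations might look like the following:

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.ListView;

// Hypothetical activity handling one user action: showing a list of places.
public class PlaceListActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Inflate the view hierarchy declared in res/layout/place_list.xml.
        setContentView(R.layout.place_list);
        ListView places = (ListView) findViewById(R.id.place_list);
        // ... attach an adapter supplying the list of mappable locations ...
    }
}
```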

Services. Service components are long-running or background components that do not interact directly with the user. For example, I/O operations started by an activity may not complete before the user-facing activity disappears; a service component can perform the I/O in this case, independent of the lifespan of the UI components that started it. Services define and expose their own interfaces, to which other components must bind in order to use them.
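A minimal sketch of such a component follows, assuming a hypothetical download task; clients call bindService() and use the returned binder as the service's interface:

```java
import android.app.Service;
import android.content.Intent;
import android.os.Binder;
import android.os.IBinder;

// Hypothetical service performing long-running I/O independent of any UI.
public class DownloadService extends Service {
    // The interface exposed to components that bind to this service.
    public class LocalBinder extends Binder {
        public DownloadService getService() { return DownloadService.this; }
    }

    private final IBinder binder = new LocalBinder();

    @Override
    public IBinder onBind(Intent intent) {
        return binder; // handed to clients that call bindService()
    }

    public void startDownload(String url) {
        // ... perform the I/O on a worker thread, outliving the caller's UI ...
    }
}
```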

Broadcast receivers. System-wide broadcast events can be generated by the system software or by apps, as previously mentioned. Broadcast receivers are components that listen for these broadcasts on behalf of an application; a single application can use multiple broadcast receivers, each listening for different announcements. In response, a broadcast receiver can use the system-wide notification manager or launch another component, such as an activity, to engage the user.
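For illustration, the sketch below listens for the system's boot-completed broadcast and reacts by launching another component (the hypothetical service from the previous sketch); registering the receiver and the RECEIVE_BOOT_COMPLETED permission in the manifest are assumed:

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

// Hypothetical receiver listening for a system-wide broadcast.
public class BootReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (Intent.ACTION_BOOT_COMPLETED.equals(intent.getAction())) {
            // React by launching another component, here a service.
            context.startService(new Intent(context, DownloadService.class));
        }
    }
}
```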

Content providers. Content providers are components that allow access to an application’s data. The Android SDK supplies base classes both for the content provider itself (the provider component must extend the base class) and for the components seeking access. The content provider can store the data in any back-end representation it wants, including the file system, SQLite, or some other application-specific representation (including ones implemented via remote web services). Android applications are made up of combinations of instances of these component types, and component invocation is controlled by a system-wide messaging mechanism based on intents.
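As a concrete illustration of the access side, the sketch below queries the system's Contacts provider through a ContentResolver (the READ_CONTACTS permission is assumed to be declared in the manifest):

```java
import android.content.Context;
import android.database.Cursor;
import android.provider.ContactsContract;

// Sketch: reading data exposed by the Contacts content provider.
public class ContactReader {
    public static void dumpContactNames(Context context) {
        Cursor cursor = context.getContentResolver().query(
                ContactsContract.Contacts.CONTENT_URI,              // provider addressed by URI
                new String[] { ContactsContract.Contacts.DISPLAY_NAME },
                null, null, null);                                  // no selection, default order
        if (cursor == null) return;
        try {
            int col = cursor.getColumnIndex(ContactsContract.Contacts.DISPLAY_NAME);
            while (cursor.moveToNext()) {
                String name = cursor.getString(col);
                // ... use the record; the back-end representation is opaque ...
            }
        } finally {
            cursor.close();
        }
    }
}
```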

Development of Native Code

While most Android apps are created in Java using the SDK, Google’s Native Development Kit (NDK) provides a lower-level development environment. The NDK was first released in June 2009 and has since undergone five updates, the most recent in November 2010.

The NDK enables developers to write C/C++ code that is compiled directly for the device’s CPU. While this adds complexity to the development process, it benefits some developers by letting them reuse existing C/C++ code or implement functions that can be optimized outside the Dalvik VM. The NDK does not allow developers to build entire programs that run outside the Dalvik VM; instead, the C/C++ components are packaged inside the application’s .apk file and called from within the VM by the application.
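On the Java side, calling into such a packaged native component follows the standard JNI pattern; the library and method names below are hypothetical:

```java
// Sketch of how code running inside the Dalvik VM calls a C/C++ component
// built with the NDK and packaged in the .apk.
public class NativeChecksum {
    static {
        // Loads libchecksum.so from the .apk's native-library directory.
        System.loadLibrary("checksum");
    }

    // Implemented in C/C++ and compiled by the NDK toolchain for the target CPU.
    public static native int checksum(byte[] data);
}
```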

The NDK currently supports the ARMv5TE and ARMv7-A CPU architectures and will support Intel’s x86 architecture in the future. Cross-compiling occurs when a developer writes code on one platform (e.g., Mac OS X) but compiles it for a different CPU. The NDK makes this process much easier by providing a collection of libraries that the developer can use.

Cross-compiling is a crucial component in the research and development of new methodologies and exploits in forensics and security. While most forensic analysts and security engineers do not need to compile code, it is important to understand how the process works and what role it plays. The first Android 1.5 root exploit abused a Linux kernel issue (CVE-2009-2692) to acquire access, and its first version was distributed as source code, requiring cross-compilation. One big benefit of this approach is that an examiner can describe in detail how the device was exploited and, if necessary, provide the source code.

Low-level hardware functions, including drivers and memory management, are handled by the operating system kernel, which pays special attention to power efficiency. The Android runtime supports Java programs that run inside a custom virtual machine. It covers the core Android libraries as well as the majority of Java Standard Edition features. Access to basic libraries like WebKit, SSL, and OpenGL is mediated via the runtime.

The application framework communicates with the libraries via the virtual machine and exposes high-level APIs for window management, location management, data storage, communication, sensing, and more that developers can use in their applications. The application layer includes both standard Android programs, such as the phone dialer, SMS messenger, contact manager, and music player, and proprietary software, such as app stores and e-mail clients that come bundled with the device.

Programming an Android application involves writing the business logic code, supplying the multimedia assets the application requires, and providing the resources needed for the user interface, such as the layout specification (expressed declaratively in XML), the icons, and the localization strings.

The following are the fundamental concepts that make up an Android application:

Activities: an activity is a single focused user task, such as adding an entry to the calendar or capturing a picture.

Views and view groups: a view is an interface widget, such as a button or a text input; views are grouped into view groups, which reflect the hierarchical layout and organization of content.

Intents: an intent is a declared request for an action to be performed. Intents facilitate communication between activities, either explicitly, by naming the activity the intent targets, or implicitly, by naming the desired action, which is bound at runtime to activities capable of performing it.

Implicit intents are resolved using intent filters, which are declarations of the actions an activity can perform. Intents can also be used to send and receive broadcast messages that alert the system or application to an event, and they can carry data, allowing parameters to be passed between operations. A brief sketch of both styles of intent follows below.
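In the sketch, the target activity and extras are hypothetical, reusing the PlaceListActivity from the earlier example:

```java
import android.content.Context;
import android.content.Intent;
import android.net.Uri;

public class IntentExamples {
    // Explicit intent: names the target activity class directly.
    static void openPlaceList(Context context) {
        Intent explicit = new Intent(context, PlaceListActivity.class);
        explicit.putExtra("placeId", 42); // parameters travel as extras
        context.startActivity(explicit);
    }

    // Implicit intent: states only the desired action; the system resolves it
    // at runtime against the intent filters of installed activities.
    static void showOnMap(Context context) {
        Intent implicit = new Intent(Intent.ACTION_VIEW, Uri.parse("geo:0,0?q=Berlin"));
        context.startActivity(implicit);
    }
}
```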

Events and event listeners: an event is an occurrence within an activity that can be handled directly by a business method attached to the view element that generated it. Alternatively, an event can be broadcast so that registered business processes (referred to as listeners) can react to it.
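The first style, a listener attached directly to the generating view, looks roughly like this (the layout, widget, and business-method identifiers are hypothetical):

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;

public class EditContactActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.edit_contact);
        Button save = (Button) findViewById(R.id.save_button);
        // The listener ties a business method to the view that fires the event.
        save.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                saveContact();
            }
        });
    }

    private void saveContact() { /* business logic */ }
}
```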

Finally, Some Thoughts

Because Android applications gather users’ personal and sensitive information, malware analysis and detection is a critical task. This chapter discusses the numerous risks that Android users face and offers suggestions for how to deal with them. It provides an overview of approaches for dynamically analyzing Android malware. Malware is becoming increasingly stealthy as sophisticated antidetection technologies are adopted on a daily basis.

This chapter focuses on antidetection techniques. It examines and categorizes a variety of state-of-the-art dynamic analysis methodologies, as well as the issues each poses. In addition, using publicly accessible and self-developed malware, the chapter conducts an empirical review of state-of-the-art dynamic analysis techniques. The results demonstrate the limitations of the stated analysis techniques in detecting contemporary malware, which combines numerous antidetection tactics of varying degrees of complexity. We also discuss open issues that the research community needs to address.

We observe that achieving scalability is a major challenge in dynamic analysis. In contrast to pure dynamic analysis, we believe that combining static and dynamic analysis would significantly improve performance. A layered system, in which a static stage performs as much of the analysis as possible and then guides the dynamic analysis to focus only on the portions that static analysis cannot handle, would prove to be a scalable solution. Furthermore, it is critical to integrate automated exploration approaches with dynamic analysis frameworks.

We believe that solutions that guarantee exploration of malicious or vulnerable application code during automated analysis would significantly reduce false negatives. Support for both Java and native code analysis would be critical to achieving a complete solution. The exponential growth of Android apps, and the accompanying security issues, necessitate a fast, scalable, and automated solution. To make comparisons easier, we display the anomaly scores (on the y-axis) of all the apps aggregated per cluster (on the x-axis), scaled so that all clusters have the same width.

As one might expect, the outcomes differ per cluster. There are clusters in which the outliers are evident anomalies, such as Clusters 5 and 29, and clusters that contain several outliers with high anomaly scores, such as Clusters 6 and 20. When a cluster contains too many outliers, it lacks a realistic model of “normal” behavior, and as a result our technique may be less effective.

Selection of Features
The selection of features for detecting anomalous applications within a cluster is described in Section 10.3.2. Whereas we previously treated sensitive API usage as binary features (i.e., 1 if the app used the API at least once; 0 otherwise), we now weight APIs using IDF. To check whether this feature selection is reasonable, we ran the anomaly detection method on each cluster with three alternative sets of features:

For the first option, we evaluated sensitive API usage as binary values. These are the features we used in our conference paper [1]; this option is referred to as api-binary. The second option weights APIs with IDF, as described above; this is referred to as api-idf. For the third option, instead of APIs we used permissions, weighted with IDF, to see whether permissions could be a viable alternative to APIs; this option is referred to as permission-idf.
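To make the weighting concrete, the sketch below computes a standard IDF score per API over the apps in a cluster; the exact IDF variant CHABADA uses is not specified here, so log(N/df) is an assumption:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: IDF weighting of sensitive-API features. APIs used by nearly every
// app in the cluster get a low weight; rarely used APIs get a high weight.
public class IdfWeights {
    /** apiUsage holds, for each app in the cluster, the set of APIs it uses. */
    public static Map<String, Double> compute(List<Set<String>> apiUsage) {
        Map<String, Integer> df = new HashMap<>(); // document frequency per API
        for (Set<String> apis : apiUsage) {
            for (String api : apis) {
                df.merge(api, 1, Integer::sum);
            }
        }
        int n = apiUsage.size();
        Map<String, Double> idf = new HashMap<>();
        for (Map.Entry<String, Integer> e : df.entrySet()) {
            idf.put(e.getKey(), Math.log((double) n / e.getValue())); // assumed variant
        }
        return idf;
    }
}
```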

Comparing different settings is difficult because it would necessitate a thorough manual check. Instead, we compare the distance-based plots of the clusters visually. Figure 10.4 depicts Cluster 29, one of the clusters for which we have better results. The plots illustrate the three options described above, from left to right: api-binary, api-idf, and permission-idf.

We employed multi-dimensional scaling, a statistical technique for visualizing dissimilarity in multi-dimensional data. This allowed us to plot the data in two dimensions while preserving the original distances in the multi-dimensional space as accurately as feasible.

As can be seen, weighting permissions or APIs with IDF makes it easier to distinguish anomalies, as the distance between the outliers and the remainder of the cluster is emphasized. Comparing the two choices, however, APIs are preferable to permissions. The next section provides further evidence that IDF weighting leads to better results.

Let us move on to RQ3: Is CHABADA capable of detecting malicious Android apps, and do the enhancements proposed in this work produce better outcomes than those in [1]? For this we used the dataset of [21], which contains over 1200 reported malicious Android apps; it is the same dataset that was used in the original CHABADA study.

We used the OC-SVM classifier as a malware detector, as stated in Section 10.3.4. Only the applications that were not flagged as outliers by the distance-based approach were used to train the model within each cluster. Following K-fold validation, we divided the full collection of non-outlier benign applications into ten subsets, nine of which were used to train the model and one of which was used to test it. The malicious programs were then added to the test set, and we repeated the process ten times, each time testing a different subset.

As a result, we created a scenario in which the malware attack is completely new and CHABADA must accurately recognize it as such without any prior knowledge of malware patterns. Because malicious programs are assigned to clusters based on their descriptions, the malicious applications are not evenly spread across clusters; in our evaluation scenario, the number of malicious applications per cluster ranges from 0 to 39.
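As an illustration of the training step, the sketch below builds a one-class SVM from benign, non-outlier feature vectors. The chapter does not name an implementation, so the libsvm Java bindings and all parameter values here are assumptions:

```java
import libsvm.*;

// Sketch: training OC-SVM on benign apps only; prediction then returns +1
// for apps judged "normal" and -1 for outliers (suspected malware).
public class OneClassTrainer {
    static svm_model train(double[][] features, double gamma, double nu) {
        svm_problem prob = new svm_problem();
        prob.l = features.length;
        prob.y = new double[prob.l]; // labels are ignored for one-class training
        prob.x = new svm_node[prob.l][];
        for (int i = 0; i < prob.l; i++) {
            prob.x[i] = toNodes(features[i]);
            prob.y[i] = 1;
        }
        svm_parameter param = new svm_parameter();
        param.svm_type = svm_parameter.ONE_CLASS;
        param.kernel_type = svm_parameter.RBF;
        param.gamma = gamma;      // kernel size (see Section 10.3.3)
        param.nu = nu;            // bound on the fraction of margin errors
        param.cache_size = 100;
        param.eps = 0.001;
        return svm.svm_train(prob, param);
    }

    static svm_node[] toNodes(double[] row) {
        svm_node[] nodes = new svm_node[row.length];
        for (int j = 0; j < row.length; j++) {
            nodes[j] = new svm_node();
            nodes[j].index = j + 1;  // libsvm feature indices are 1-based
            nodes[j].value = row[j];
        }
        return nodes;
    }
}
```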

We use the standard Receiver Operating Characteristic (ROC) approach to evaluate the performance of a classifier. A ROC curve depicts the relative trade-off between benefits (true positives) and costs (false positives). The findings of our trials are depicted in Figure 10.5 as a ROC curve, which plots the true positive rate against the false positive rate at various thresholds.

Figure 10.5 depicts the ROC curves of the worst and best clusters (Cluster 16 and Cluster 7, respectively), as well as the overall performance. We calculated the average of 10 separate runs to arrive at these figures. We also report the Area Under the ROC Curve (AUC) measure [22], which summarizes the classifier’s predictive accuracy: an AUC of 1.0 corresponds to perfect classification, whereas an area of 0.5 means the classifier is no better than chance. As the AUC for the considered dataset is 0.87, we may conclude that CHABADA is effective at detecting malware, with only a few false positives.
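For reference, the AUC can also be computed directly from the scores, without plotting, as the probability that a randomly chosen malicious app scores higher than a randomly chosen benign one (the Mann-Whitney formulation); a minimal sketch:

```java
// Sketch: AUC as the Mann-Whitney statistic over anomaly scores.
public class Auc {
    static double auc(double[] maliciousScores, double[] benignScores) {
        double wins = 0;
        for (double m : maliciousScores) {
            for (double b : benignScores) {
                if (m > b) wins += 1.0;
                else if (m == b) wins += 0.5; // ties count as half
            }
        }
        return wins / ((double) maliciousScores.length * benignScores.length);
    }
}
```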

In our first article [1], CHABADA used the set of sensitive Android APIs as binary features. Furthermore, when training the model for classification, we did not filter out anomalous applications, and we used the default (and hence not ideal) kernel size and margin error for OC-SVM. Section 10.3 detailed all of the enhancements made in the latest CHABADA release. Here we assess how these modifications affect the final result, to determine the effectiveness of our technique as a malware detector.

Table 10.5 displays the comprehensive findings of the evaluation when various parameters are taken into account. The first column (Filter) indicates whether malware detection was performed on filtered data: a “+” indicates that we ran anomaly detection first and removed the outliers from the training set, while a “−” denotes that, as in [1], we considered all of the applications. The second column indicates whether the OC-SVM kernel parameter γ was automatically chosen for optimal results or left at its default value, as in [1].

This parameter, as explained in Section 10.3.3, is connected to the kernel size. The third column lists the value assigned to the ν parameter, which can be supplied to OC-SVM. The ν parameter is a lower bound on the fraction of support vectors relative to the total number of training examples and an upper bound on the fraction of margin errors in the training data. Assigning a low value to ν results in fewer false positives and potentially more false negatives, whereas a high value results in the reverse. We used the default value.

The last six columns report the results obtained with the corresponding settings using APIs as binary features (as in [1]) or weighting them with IDF (as explained in Section 10.3.2). We report the True Positive Rate (TPR) (i.e., the proportion of malicious Android applications identified as such), the True Negative Rate (TNR) (i.e., the proportion of benign Android apps identified as such), and the geometric accuracy. We use geometric accuracy because our dataset is substantially imbalanced (malicious vs. benign applications), and standard accuracy measurements would bias the results.
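The chapter does not spell out the formula, but geometric accuracy is commonly defined as the geometric mean of TPR and TNR, sqrt(TPR × TNR), which a classifier cannot inflate by simply favoring the majority class; a minimal sketch under that assumption:

```java
// Sketch: geometric accuracy (g-mean) from the confusion-matrix counts.
public class GeometricAccuracy {
    static double compute(int truePos, int falseNeg, int trueNeg, int falsePos) {
        double tpr = truePos / (double) (truePos + falseNeg); // malware caught
        double tnr = trueNeg / (double) (trueNeg + falsePos); // benign recognized
        return Math.sqrt(tpr * tnr);
    }
}
```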

As for Figure 10.5, we present the average values over 10 runs. The first highlighted row reports the results and settings used in the original CHABADA study; in essence, this is the baseline for our malware detector. This row also shows how the results would have changed if IDF weighting alone had been applied to the features. As highlighted in bold, without any of the changes disclosed in this study we could detect 56.4 percent of the malware and 84.1 percent of the benign applications.
