<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>The CSI Journal on Computer Science and Engineering</title>
    <link>https://www.csionjcse.ir/</link>
    <description>The CSI Journal on Computer Science and Engineering</description>
    <atom:link href="" rel="self" type="application/rss+xml"/>
    <language>en</language>
    <sy:updatePeriod>daily</sy:updatePeriod>
    <sy:updateFrequency>1</sy:updateFrequency>
    <pubDate>Thu, 01 Jan 2026 00:00:00 +0330</pubDate>
    <lastBuildDate>Thu, 01 Jan 2026 00:00:00 +0330</lastBuildDate>
    <item>
      <title>New Goal-Oriented Approach to Assess Main Virtual Organization’s Elements Integrated with Model-Based Methods</title>
      <link>https://www.csionjcse.ir/article_228082.html</link>
      <description>A sufficient understanding of the status of a virtual organization (VO) requires assessment from several viewpoints. Evaluating a VO includes evaluating the VO as a whole, assessing its partner organizations, evaluating its projects and their progress, evaluating the VO's improvement over different periods, and assessing its capabilities and service groups, among others. An accurate evaluation of the VO is a key factor in more effective VO management and helps the VO better achieve its mission and objectives. This research presents a new goal-oriented procedure for service-oriented (SO) VO assessment. The method covers the different assessment types mentioned above, acts as part of COBEAM (collaborative enterprise architecture method), and works consistently with many goal-oriented, service-oriented, concept-based methods.</description>
    </item>
    <item>
      <title>Performance Improvement of CIGS Thin-Film Solar Cells by Incorporating an ITO Photoanode and Designing a Structured Back Layer</title>
      <link>https://www.csionjcse.ir/article_228599.html</link>
      <description>Copper Indium Gallium Selenide (CIGS) thin-film solar cells are well known for their intrinsic stability and suitable direct bandgap, making them promising candidates for high-efficiency photovoltaic applications. This study aims to improve the performance of CIGS solar cells by incorporating pitch-like gaps in the back contact layer to reduce minority carrier recombination. Using Silvaco TCAD simulations, the effects of various gap widths and material properties on cell efficiency were analyzed. Results indicated that an optimal gap width of 200 nm, combined with increased doping concentration and charge density in the absorber layer above 1 cm⁻³, significantly boosted performance. Additionally, tuning the doping profiles of other layers mitigated the adverse effects of the back layer's high resistivity and low conductivity. Furthermore, integrating a transparent ITO photoanode improved front-side light transmission. Collectively, these design modifications increased the device efficiency from 20.8% to 27.5%, demonstrating a substantial improvement in both optical and electrical characteristics of the CIGS solar cell architecture.</description>
    </item>
    <item>
      <title>Comparing Time-Series Analysis Approaches Utilized in Research Papers to Forecast COVID-19 Cases in Africa: A Literature Review</title>
      <link>https://www.csionjcse.ir/article_229701.html</link>
      <description>This literature review aimed to compare various time-series analysis approaches utilized in forecasting COVID-19 cases in Africa. The study involved a methodical search for English-language research papers published between January 2020 and July 2023, focusing specifically on papers that utilized time-series analysis approaches on COVID-19 datasets in Africa. A variety of databases including PubMed, Google Scholar, Scopus, and Web of Science were utilized for this process. The research papers underwent an evaluation process to extract relevant information regarding the implementation and performance of the time-series analysis models. The study highlighted the different methodologies employed, evaluating their effectiveness and limitations in forecasting the spread of the virus. The results of this review offer deeper insights into the field, and future research should build on these insights to improve time-series analysis models and explore the integration of different approaches for enhanced public health decision-making.</description>
    </item>
    <item>
      <title>Brain Tumor Segmentation Based on Deep Learning Using Multimodal MRI Images</title>
      <link>https://www.csionjcse.ir/article_233430.html</link>
      <description>Accurate segmentation and localization of brain tumors from magnetic resonance imaging (MRI) scans remain one of the major and ongoing challenges in the field of medical image analysis. This task is critically important, as it directly impacts clinical diagnosis, surgical planning, and treatment strategies for patients. The heterogeneous nature of brain tumors, along with their varying sizes, shapes, and locations, makes automated segmentation particularly complex. To address this, recent advanced methodologies commonly incorporate multiple MRI modalities, such as T1-weighted, contrast-enhanced T1 (T1c), T2-weighted, and FLAIR images, each of which offers complementary information regarding different tissue characteristics. These multi-modal approaches have significantly improved segmentation performance by providing a more comprehensive understanding of the tumor's structure. However, despite the promising results achieved on benchmark datasets like BRATS 2018, many state-of-the-art methods rely on deep architectures with high computational complexity, which can hinder their deployment in real-time or resource-constrained clinical environments. To overcome these limitations, this study introduces a novel deep learning-based framework tailored specifically for brain tumor segmentation. Extensive experiments conducted on the BRATS 2018 dataset reveal that the proposed approach not only surpasses existing models in terms of accuracy but also shows strong generalization capability and robustness when segmenting complex and irregular tumor boundaries, making it a promising tool for real-world clinical applications.</description>
    </item>
    <item>
      <title>Porosity Evaluation Using Artificial Neural Network, Optimized with GA and PSO</title>
      <link>https://www.csionjcse.ir/article_233781.html</link>
      <description>The precise evaluation of porosity is fundamental to reservoir characterization and volumetric assessment. While direct measurements from core analysis provide accurate results, they are economically and operationally prohibitive for continuous formation evaluation. This study presents a robust machine learning framework that employs Artificial Neural Networks (ANNs), namely the Multilayer Perceptron (MLP) and Radial Basis Function (RBF), to predict porosity from conventional well logs. To maximize predictive accuracy and convergence efficiency, the models are optimized using two metaheuristic algorithms: the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). Applied to a dataset from a carbonate reservoir in South-West Iran, the hybrid models (MLP-GA, MLP-PSO, RBF-GA, RBF-PSO) demonstrate a superior performance compared to their non-optimized counterparts. The optimization led to a significant increase in the correlation coefficient (R) and a substantial reduction in the Mean Square Error (MSE) for both vertical and horizontal porosity estimates. This research conclusively establishes that the synergy of ANNs with evolutionary optimizers offers a reliable, cost-effective, and rapid solution for porosity prediction, with strong potential for broader application in petrophysical property estimation.</description>
    </item>
    <item>
      <title>Fully Dispersed Haar-like Filters for Enhanced Facial Feature Extraction and Recognition</title>
      <link>https://www.csionjcse.ir/article_233203.html</link>
      <description>Haar-like filters are well known for their simplicity, speed, and accuracy in various computer vision tasks. This paper proposes a novel algorithm to identify the optimal fully dispersed Haar-like filters for enhanced facial feature extraction. Unlike traditional Haar-like filters, the proposed filters allow pixels to move freely within an image, thereby capturing intricate local features more effectively. Extensive experiments on face detection and facial expression recognition demonstrate that the optimized filters can distinguish between face images and clutter with minimal error, thereby outperforming existing algorithms. By leveraging a dataset-driven approach to optimize filter weights, the proposed method achieves high accuracy in facial feature extraction, making it a promising tool for various computer vision applications. The MATLAB code corresponding to the proposed algorithm is available at https://github.com/Sedaghatjoo/fully-dispersed-Haar-like-filter.</description>
    </item>
    <item>
      <title>Using deep reinforcement learning networks to provide dynamic scheduling in workplace environments</title>
      <link>https://www.csionjcse.ir/article_234698.html</link>
      <description>The dynamic scheduling problem in modern work environments, such as data centers and cloud computing systems, is considered one of the complex challenges in the field of system optimization due to the heterogeneous and variable nature of input tasks. Traditional and static scheduling methods such as FIFO and SJF cannot adapt to the real-time and dynamic conditions of the environment and therefore lack the efficiency needed to optimize key performance metrics. In this paper, an asynchronous conditional policy factoring algorithm is presented for dynamic scheduling. This algorithm uses the policy factoring mechanism to learn complex, coordinated policies and improves the convergence speed and efficiency of the learning process by using asynchronous updates. This approach allows the system to effectively deal with the uncertainty and dynamics of the environment and allocate resources optimally. The experimental results clearly demonstrate the superiority of the proposed algorithm across all evaluation criteria, including total task completion time and average task waiting time. The proposed algorithm reduces makespan by 5% and average waiting time by 13% compared to the best reference algorithm, namely QMIX.</description>
    </item>
    <item>
      <title>Two-Way Intelligent Trust Management Using Machine Learning and Subjective Logic in FOG</title>
      <link>https://www.csionjcse.ir/article_236792.html</link>
      <description>The rapid proliferation of the Internet of Things (IoT) has intensified concerns regarding trust, security, and reliability in large-scale, dynamic, and resource-constrained environments. Conventional trust management approaches, primarily based on reputation, rule-based reasoning, or static evidence aggregation, struggle to adapt to evolving behaviors and sophisticated attacks such as collusion and Sybil attacks. Moreover, most existing solutions adopt a one-way trust model, neglecting the inherently mutual nature of trust between IoT devices and fog infrastructures. This paper proposes TWITMF, a Two-Way Intelligent Trust Management Framework that integrates machine learning (ML) with Subjective Logic (SL) to enable adaptive, uncertainty-aware, and bidirectional trust evaluation in IoT-fog environments. In the proposed framework, lightweight ML models learn behavioral patterns and generate predictive trust evidence, while Subjective Logic explicitly models uncertainty and fuses both direct and indirect evidence into interpretable trust opinions. Unlike ML-only approaches that produce point estimates, TWITMF treats ML outputs as evidence rather than final decisions, allowing robust trust reasoning under sparse or conflicting observations. The framework supports mutual trust assessment, enabling both IoT devices and fog nodes to evaluate each other prior to interaction. Extensive simulation-based experiments conducted in a fog-enabled IoT environment demonstrate that TWITMF significantly outperforms reputation-based, ML-only, and SL-only baselines. The proposed framework achieves up to 95% F1-score, reduces detection latency, and exhibits strong resilience against coordinated collusion and Sybil attacks, while maintaining low computational overhead suitable for real-time deployment. These results confirm the effectiveness of combining data-driven learning with uncertainty-aware reasoning for secure and reliable trust management in next-generation IoT applications such as smart cities, healthcare monitoring, and intelligent transportation systems.</description>
    </item>
    <item>
      <title>Online Multi-Object Tracking Using Convolutional Neural Networks and the Invasive Weed Optimization Algorithm</title>
      <link>https://www.csionjcse.ir/article_235702.html</link>
      <description>Multi-object tracking (MOT) is a fundamental problem in computer vision with critical applications in areas such as video surveillance, human-computer interaction, autonomous driving, and video analytics. The main objective of MOT is to estimate the motion trajectories of multiple objects across sequential video frames while preserving their consistent identities throughout the sequence. MOT algorithms are generally categorized into two types: online methods, which process each frame sequentially and make tracking decisions in real-time, and offline methods, which process the entire video or segments of it as a batch to improve accuracy. In this study, we propose an online multi-object tracking method based on convolutional neural networks (CNNs). Unlike traditional approaches with fixed architectures, our method dynamically optimizes the number of hidden layers in the ANN using the Invasive Weed Optimization (IWO) algorithm, a nature-inspired metaheuristic optimization technique. This optimization aims to minimize the classification error, thereby enhancing the tracking performance by selecting a network architecture that is best suited to the complexity of the input data. The proposed system is evaluated using the VS-PETS 2009 benchmark dataset, a widely used dataset for evaluating object tracking algorithms. All simulations and model training are carried out in the MATLAB environment. The experimental results indicate that the proposed method achieves superior tracking accuracy and identity preservation performance compared to conventional tracking methods, demonstrating the effectiveness of combining ANNs with IWO in real-time multi-object tracking scenarios.</description>
    </item>
    <item>
      <title>MetaRecall: An Ensemble Classifier with Dynamic Base Classifier Selection and Ordering</title>
      <link>https://www.csionjcse.ir/article_235564.html</link>
      <description>The main criteria for evaluating different classifiers are the accuracy of classification, the time taken to build the classifier and to classify, and the generalizability of the classifier. In this paper, we propose a novel ensemble classifier, named MetaRecall, which exploits the confusion matrix to automatically select its base classifiers and thereby increase the accuracy of the final classification. To do so, a set of base classifiers and the training dataset are fed to the algorithm as input, and the output of the algorithm is an ensemble classifier that contains a subset of the given base classifiers. Each classifier involved in MetaRecall corresponds to a class in the given dataset, and its task is to classify instances of its corresponding class. To evaluate the performance of MetaRecall, we conduct extensive experiments on different well-known benchmark datasets. In addition, we compare MetaRecall with the most commonly used previous ensemble classifiers. The results show that MetaRecall outperforms the previous classifiers in terms of accuracy and execution time in many cases.</description>
    </item>
    <item>
      <title>Performance Benchmarking of Load Balancers and Service Brokers in Heterogeneous Clouds using CloudAnalyst</title>
      <link>https://www.csionjcse.ir/article_236476.html</link>
      <description>Cloud computing's foundational challenge lies in efficiently distributing user requests across geographically distributed data centers (DCs) and subsequently across heterogeneous virtualized resources. This review paper systematically analyzes two critical components of this process: Service Broker Policies (SBPs), which determine DC selection, and Load Balancing (LB) Algorithms, which manage task distribution within Virtual Machines (VMs) in a DC. The study provides a detailed taxonomy and analysis of prevalent policies and algorithms, evaluating their mechanisms, strengths, and weaknesses. Furthermore, it presents a rigorous empirical performance analysis using the CloudAnalyst simulation tool. Three representative algorithms, Round Robin (RR), Equally Spread Current Execution (ESCE), and Throttled LB (TLB), are evaluated under two distinct SBPs: Closest DC and Optimized Response Time. The simulation models a realistic, heterogeneous cloud infrastructure subjected to dynamically varied workloads. Empirical results demonstrate that the TLB algorithm provides the most robust performance, characterized by highly competitive average response times and exceptional stability, as evidenced by its consistently minimal maximum latency. This is achieved through a proactive, capacity-aware distribution strategy that strategically directs workloads toward high-performance VMs. In stark contrast, the static RR algorithm proves fundamentally unsuitable for heterogeneous environments, incurring severe performance degradation and unstable response times due to its inability to account for disparate VM capacities. Although the SBP exerts a measurable influence on performance, the analysis conclusively establishes that the selection of the load balancing algorithm is the paramount factor in optimizing overall system performance, quality of service (QoS), and resource utilization in heterogeneous cloud environments.</description>
    </item>
    <item>
      <title>Modeling the Impact of Soil Liquefaction on Structural Stability Using an Artificial Neural Network Optimized by NSGA-III</title>
      <link>https://www.csionjcse.ir/article_235566.html</link>
      <description>Soil liquefaction is one of the most critical geotechnical phenomena that can severely compromise the stability and performance of engineering structures during seismic events. Accurate prediction of liquefaction potential and its subsequent effects on structural stability remains a complex and nonlinear problem influenced by multiple interdependent soil and seismic parameters. In this study, an Artificial Neural Network (ANN) model is developed to model and predict the impact of soil liquefaction on structural stability using a comprehensive set of geotechnical and seismic input features, including groundwater depth, shear wave velocity (Vs30), standard penetration test (SPT) results, and peak ground acceleration (PGA). To enhance the predictive performance and generalization capability of the ANN, its hyperparameters and network architecture are optimized through the Non-dominated Sorting Genetic Algorithm III (NSGA-III), which allows for simultaneous optimization of multiple conflicting objectives such as prediction accuracy and model complexity. The optimized ANN demonstrates superior performance in classifying liquefaction and non-liquefaction cases, achieving high accuracy and robustness across validation datasets. Moreover, the proposed hybrid NSGA-III-ANN framework provides a reliable and efficient computational approach for evaluating the influence of liquefaction on structural stability, offering valuable insights for seismic design, risk assessment, and mitigation strategies in geotechnical engineering.</description>
    </item>
    <item>
      <title>Application of Optimized Artificial Neural Networks for Predicting Reservoir Permeability</title>
      <link>https://www.csionjcse.ir/article_234693.html</link>
      <description>Permeability is one of the most important reservoir properties, as it indicates the ability of fluids to flow through the pore spaces of the rock. Determining permeability is essential in processes such as estimating actual reserves and producing and developing oil reservoirs. In the oil industry, permeability is usually measured using core analysis, well testing, and empirical correlations. The conventional methods of core analysis and well testing are time-consuming and expensive, and their data are not available for every well. On the other hand, empirical correlations apply to special cases and are not accurate for every situation. Given these time-related and financial limitations, developing a method for estimating petrophysical properties such as permeability from well logging data (which are available for almost every well) is significant. An alternative approach for evaluating permeability is the use of artificial intelligence and machine learning tools. In this study, data mining methods were applied to calculate reservoir permeability from petrophysical data: the data were first normalized, and then the horizontal and vertical permeability of an Iranian reservoir were calculated from geophysical data using a multilayer perceptron (MLP) neural network together with PSO and GA. The comparison of these methods showed that combining the MLP with either PSO or GA yields the best results.</description>
    </item>
    <item>
      <title>Automatic Analog IC Layout with CNN-Based Placement Using SqueezeNet and Multi-Objective Routing via DE and NSGA-III</title>
      <link>https://www.csionjcse.ir/article_240031.html</link>
      <description>The layout design of analog integrated circuits (ICs) is a challenging and time-consuming task, requiring manual effort to extract geometric constraints such as symmetry and proximity. This paper presents a novel approach for automatic layout generation, combining the power of convolutional neural networks (CNNs) with transfer learning, specifically using the pre-trained SqueezeNet model to extract these constraints from schematic images. By applying fine-tuning, the CNN model can effectively identify and extract the necessary geometric relationships, eliminating the need for manual extraction. For the routing process, a multi-objective optimization strategy is employed using the non-dominated sorting genetic algorithm III (NSGA-III), where wire length, via count, and arch segments are minimized. To address the challenge of generating an effective initial population for NSGA-III, we introduce the differential evolution (DE) algorithm as a method for generating high-quality initial solutions, enhancing the convergence speed and solution quality. The proposed methodology is applied to the layout design of a two-stage operational amplifier (op-amp), with simulations conducted using MATLAB and validated in Cadence software on a 0.18μm CMOS process at a 1.8 V supply voltage. The results show that the proposed method significantly outperforms existing techniques in terms of layout efficiency, performance, and automation in the design process.</description>
    </item>
  </channel>
</rss>
