Module 11: Solutions and Colloids

Discussion: Solubility Experiment. Try this experiment at home. You will need the following materials:

  • Water
  • Another liquid you can see through (dish soap, canola oil, rubbing alcohol, clear soda, etc.)
  • 3 small cups (something clear is easiest)
  • Measuring cups/spoons
  • Stirrers (anything is fine)
  • Table sugar
  • Other sweeteners (e.g., Stevia)
  • Powdered chalk
  • Baking soda
  • Corn starch
  • Spices (e.g., pepper, paprika)
  • Powdered Jello mix
  • Thermometer (optional)
  • Scale (optional)

Procedure:

  • Read the instructions all the way through before beginning!
  • Label a cup for each powdered compound you’ve gathered.
  • Fill each cup with 1/2 cup of water. Then add 1 teaspoon of the powder to be tested to the cup. Stir carefully and observe. Does it dissolve? Record your observations.
  • Keep adding the powder slowly 1/2 or 1 teaspoon at a time and stirring until it doesn’t dissolve anymore (1/2 tsp increments will give you a more accurate measurement). Record how much you add.
  • Repeat for all of your powdered compounds.
  • Wash out your three cups or get three new ones – all of these mixtures should be safe to dispose of down the kitchen sink or into the trash.
  • Now fill a cup with 1/2 cup of the other liquid (solvent). Select one of your powdered compounds and add 1 teaspoon to the cup. Stir carefully and observe. Keep adding the powder slowly 1/2 or 1 teaspoon at a time and stirring until it doesn’t dissolve anymore. Keep track of how much you add and record your observations.
  • Feel free to repeat with any combination of the above solvents and solutes.
  • All of these mixtures should be safe to dispose of down the kitchen sink or into the trash. Carefully clean up your space.

***If you want to be more scientific in your data collection: take and record the temperatures of the water and the other liquid, and add the powdered compounds by weight (grams) instead of by teaspoon measurements.***

Observations/Data:

Post your results to compare the solubility of the materials you decided to use. If you can, make a table to report your data. Include any observations you made throughout the experiment. Make some conclusions based on your knowledge of solubility and the materials used – explain why you think your results turned out the way they did.
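If you weighed your solutes as suggested above, a short script can turn your notes into a comparable table. The sketch below is only an illustration: the solutes, the masses, and the 120 mL estimate for half a cup of water are assumed example values, not measured data.

```python
# Hypothetical example of tabulating home solubility results.
# Masses are assumed measurements in grams; 1/2 cup of water is taken as ~120 mL.
results = {
    "table sugar": 95.0,   # grams added before no more would dissolve
    "baking soda": 10.5,
    "corn starch": 0.0,    # did not dissolve (forms a suspension instead)
}

WATER_VOLUME_ML = 120.0  # approximate volume of 1/2 cup

print(f"{'Solute':<15}{'g dissolved':>12}{'g per 100 mL':>15}")
for solute, grams in results.items():
    per_100_ml = grams / WATER_VOLUME_ML * 100
    print(f"{solute:<15}{grams:>12.1f}{per_100_ml:>15.1f}")
```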

  • Discussion: Solubility Experiment. Provided by: Lumen Learning. License: Public Domain: No Known Copyright

Have you ever wondered why some substances dissolve in water while others don't? The answer is solubility, and this activity visually demonstrates the science behind it.

Solubility is the ability of a solid, liquid, or gaseous chemical substance (the solute) to dissolve in a solvent (usually a liquid) and form a homogeneous solution. Three factors affect solubility:

  • Solvent: To determine whether a solute will dissolve in a solvent, remember this saying: “Like dissolves like.”
  • Temperature: This factor affects the solubility of both solids and gases. The solubility of most solids increases with temperature, while the solubility of gases decreases as the temperature rises.
  • Pressure: This factor affects the solubility of gases, with solubility also increasing with pressure.

The Science Behind Solubility

Put simply, a substance is considered soluble if it can be dissolved, most commonly in water. When a solute such as table salt is added to a solvent such as water, the salt's ionic bonds are broken and its ions become surrounded by water molecules, causing the salt to dissolve.

However, for the salt to dissolve completely and stay dissolved, the amount of salt must remain below the solubility limit of the water. A solution becomes saturated when the solvent can dissolve no more solute, but adding heat or pressure can increase the solubility of the solute, depending on its state.


Testing the Solubility of Substances

For this experiment, your students will explore basic chemistry concepts by testing the solubility of different substances in water. From the example above, we know that table salt is highly soluble in water. What other substances can dissolve in water?

What You Need

  • Clear containers, such as cups, beakers, or bowls
  • Materials to test, such as sugar, sand, chalk, baking soda and Epsom salts
  • Stirring rods
  • Measuring spoon
  • STEM journals (optional)

What to Do

  • Begin by discussing the science of solubility, and have students write down their predictions about which materials are soluble or insoluble. Students can also document the scientific process in their STEM journals.
  • Fill each container with lukewarm tap water.
  • Add a specific amount—for example, 1 tablespoon—of a test material to a container using the measuring spoon. Repeat, adding an equal amount of a different material to each container of water.
  • Use the stirring rods to mix the contents in each container.
  • Observe which materials dissolved in the water and which did not. Did students make the right predictions?

Discussion Questions

Once the experiment is complete, use the following questions to deepen students’ understanding of solubility and how it works:

  • What are the qualities of the soluble materials versus those of the insoluble materials? For example, the soluble materials are likely powdery and dry, while insoluble materials may have a hard, grainy texture.
  • For the materials that dissolved in the water, what do you think will happen if you keep adding more to the water?
  • What are other examples of soluble substances?
  • What are other examples of insoluble substances?


Sulfate and carbonate solubility of Groups 1 and 2


Try this microscale practical to explore the properties of elements in Groups 1 and 2 as they form various precipitates

In this experiment, students add drops of sulfate and carbonate solutions to Group 1 or 2 metal ions and see whether any precipitates form. They observe that no precipitates form in Group 1, indicating that Group 1 carbonates and sulfates are soluble, while the behaviour of Group 2 is more variable.

The practical should take approximately 20 minutes.

Equipment

  • Eye protection
  • Student worksheet (available for download as a PDF or editable Word document below)
  • Clear plastic sheet (e.g. an OHP sheet)

Chemicals

Solutions should be contained in plastic pipettes. See the accompanying guidance on apparatus and techniques for microscale chemistry, which includes instructions for preparing solutions.

  • Magnesium nitrate, 0.5 mol dm⁻³
  • Calcium nitrate, 0.5 mol dm⁻³
  • Strontium nitrate, 0.5 mol dm⁻³
  • Barium nitrate, 0.2 mol dm⁻³
  • Lithium bromide, 1 mol dm⁻³
  • Sodium chloride, 0.5 mol dm⁻³
  • Potassium bromide, 0.2 mol dm⁻³
  • Sodium carbonate, 0.5 mol dm⁻³
  • Sodium sulfate, 0.5 mol dm⁻³

Health, safety and technical notes

  • Read our standard health and safety guidance.
  • Wear eye protection throughout.
  • Magnesium nitrate, Mg(NO₃)₂·6H₂O(aq), 0.5 mol dm⁻³ – see CLEAPSS Hazcard HC059b.
  • Calcium nitrate, Ca(NO₃)₂·4H₂O(aq), 0.5 mol dm⁻³ – see CLEAPSS Hazcard HC019B and CLEAPSS Recipe Book RB019.
  • Strontium nitrate, Sr(NO₃)₂·4H₂O(aq), 0.5 mol dm⁻³ – see CLEAPSS Hazcard HC019B and CLEAPSS Recipe Book RB095.
  • Barium nitrate, Ba(NO₃)₂(aq), 0.2 mol dm⁻³ – see CLEAPSS Hazcard HC011 and CLEAPSS Recipe Book RB010.
  • Sodium carbonate, Na₂CO₃·10H₂O(aq), 0.5 mol dm⁻³ – see CLEAPSS Hazcard HC095A and CLEAPSS Recipe Book RB080.
  • Sodium sulfate, Na₂SO₄(aq), 0.5 mol dm⁻³ – see CLEAPSS Hazcard HC098B.
  • Sodium chloride, NaCl(aq), 0.5 mol dm⁻³ – see CLEAPSS Hazcard HC047b.
  • Lithium bromide, LiBr(aq), 1 mol dm⁻³
  • Potassium bromide, KBr(aq), 0.2 mol dm⁻³ – see CLEAPSS Hazcard HC047b.

Procedure

  • Cover the worksheets with a clear plastic sheet.
  • Put two drops of each of the metal ion solutions in each box of the appropriate row.
  • Add two drops of each of the anion solutions to the appropriate columns.
  • Observe and interpret your observations.

Teaching notes

Observations

There should be no precipitates in Group 1, indicating that all Group 1 carbonates and sulfates are soluble.

For Group 2, magnesium sulfate is soluble while strontium and barium sulfates are insoluble. Calcium sulfate is particularly interesting because although it is only sparingly soluble its solubility is much higher than is expected from the solubility product. This is due to ion pairing of the calcium and sulfate ions in aqueous solution. No precipitate will be seen.

The concepts of solubility product and ion pairing may be too complex for most pre-16 students.

Students might think that the Group 1 part of this experiment is rather dull. However, they can be told that chemistry experiments that seem to produce no visual results may nevertheless still produce useful information!

Students should also observe that all the precipitates are white, not coloured. The accompanying solubility data will be useful.

Solubility data

The table below shows solubility in grams per 100 cm³ of water at 20 °C.

              Carbonate    Hydroxide    Sulfate      Fluoride
Magnesium     0.0106       0.0009       73.8         0.0076
Calcium       0.0014       0.185        0.209        0.0016
Strontium     0.0011       0.41         0.0113       0.012
Barium        0.002        5.6          0.00022      0.12

Source: CRC Handbook of Chemistry and Physics, 74th edn, 1993–94.
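To see the Group 2 trends at a glance, the data in this table can be dropped into a few lines of Python. The classification thresholds below are an assumption made for illustration, not part of the original resource.

```python
# Solubility data from the table above (g per 100 cm³ of water at ~20 °C).
solubility = {
    "magnesium": {"carbonate": 0.0106, "hydroxide": 0.0009, "sulfate": 73.8,    "fluoride": 0.0076},
    "calcium":   {"carbonate": 0.0014, "hydroxide": 0.185,  "sulfate": 0.209,   "fluoride": 0.0016},
    "strontium": {"carbonate": 0.0011, "hydroxide": 0.41,   "sulfate": 0.0113,  "fluoride": 0.012},
    "barium":    {"carbonate": 0.002,  "hydroxide": 5.6,    "sulfate": 0.00022, "fluoride": 0.12},
}

def classify(grams_per_100cm3):
    # Assumed cut-offs: >1 g soluble, 0.01-1 g sparingly soluble, <0.01 g insoluble.
    if grams_per_100cm3 > 1:
        return "soluble"
    return "sparingly soluble" if grams_per_100cm3 >= 0.01 else "insoluble"

for metal, salts in solubility.items():
    print(metal, {anion: classify(value) for anion, value in salts.items()})
```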

  • Sulfate and carbonate solubility – student sheet
  • Sulfate and carbonate solubility – teacher notes

Additional information

This resource is part of our Microscale chemistry collection, which brings together smaller-scale experiments to engage your students and explore key chemical ideas. The resources originally appeared in the book Microscale chemistry: experiments in miniature, published by the Royal Society of Chemistry in 1998.

© Royal Society of Chemistry

Health and safety checked, 2018

  • 14-16 years
  • 16-18 years
  • Practical experiments
  • Elements and the periodic table
  • Properties of matter

Specification

  • 2.11.8 recall the solubility trends of the sulfates and hydroxides; and
  • 3.19 Recall the general rules which describe the solubility of common types of substances in water: all common sodium, potassium and ammonium salts are soluble; all nitrates are soluble; common chlorides are soluble except those of silver and lead…
  • The relative solubilities of the hydroxides of the elements Mg–Ba in water.
  • The relative solubilities of the sulfates of the elements Mg–Ba in water.
  • 5. know the trends in solubility of the hydroxides and sulfates of Group 2 elements
  • (g) the preparation of crystals of soluble salts, such as copper(II) sulfate, from insoluble bases and carbonates
  • (i) the test used to identify SO₄²⁻ ions
  • (o) the preparation of insoluble salts by precipitation reactions
  • (j) trends in solubility of Group 2 hydroxides and sulfates
  • (r) soluble salt formation and crystallisation, insoluble salt formation by precipitation and simple gravimetric analysis


Discussing solubility in the lab report

Q1: When reporting solubilities, should I report the solvents in which the compound was insoluble, or only those in which it was soluble, together with what the positive result indicates about its identity?

A1: Report whether the compound was soluble in dH₂O and in the aqueous acids and bases. You should indicate what (if anything) the solubility tells you about the functional group or the number of carbons in the compound. You do not need to mention solubilities in organic solvents UNLESS the compound was (1) a solid carboxylic acid that was not soluble in dH₂O, so that you needed to use alcohol in the titration, or (2) a solid compound that was insoluble in chloroform, requiring the IR and NMR spectra to be run in some other solvent.

Q2: When discussing observations of solubility (miscibility), should I include an equation for a reaction that occurred between the unknown compound and sulfuric acid (e.g. formation of a clear, yellow liquid)?

A2: If your compound dissolves in an acid or base, you should show the ionization equation to indicate why it is soluble (ionic forms of organic compounds are typically more soluble than the neutral form); alternatively, if the compound has a low molecular weight and is polar, the neutral form itself may be soluble.

If you think the compound reacted, you should show the reaction equation for your specific compound. A typical reaction one might observe is dehydration of an alcohol in the presence of concentrated sulfuric acid. Besides a color change, evolution of heat is also sometimes observed. If the compound does not dissolve or react, mention this, but obviously there is then no reaction equation to show.


Experiment: Temperature and Solubility

Introduction

The "Temperature and Solubility" experiment aims to investigate how the solubility of a substance is influenced by the temperature of the solvent. This experiment is based on the hypothesis that the solubility of a solute increases with the temperature of the solvent, a concept fundamental to understanding solutions in chemistry.

Materials You Need

  • Sodium chloride (table salt) or sugar (sucrose)
  • Distilled water
  • Three beakers or glass jars
  • Stirring rods
  • Thermometer
  • Heating source (like a hot plate or Bunsen burner)
  • Balance scale
  • Graduated cylinders

Procedure

  • Label the beakers as 'Cold', 'Room Temperature', and 'Hot'. Measure and pour equal volumes of distilled water into each.
  • Adjust the temperature of the water in each beaker: add ice for 'Cold' to reach about 5°C, leave 'Room Temperature' as is, and heat 'Hot' to approximately 60°C.
  • Weigh out an equal amount of the solute and add it to each beaker.
  • Stir each solution continuously and observe the solute's dissolving rate until saturation.

Observations and Results

Record the time each solute takes to dissolve in the different temperature conditions. Note the amount of undissolved solute at the saturation point for each temperature. Compare the solubility in cold, room temperature, and hot water.

Conclusions

Analyze the data to determine if the hypothesis holds true. The conclusion should discuss whether the solubility of the solute was greater in hot water compared to cold, and what this implies about the relationship between temperature and solubility in solutions.
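A minimal sketch of this analysis step, assuming you recorded the mass of solute (in grams) that dissolved per 100 mL at each temperature; the dictionary values below are made-up example numbers, not expected results.

```python
# Example comparison of solubility at the three temperatures tested.
data = {  # temperature (°C) -> grams dissolved per 100 mL (assumed example values)
    5: 35.9,
    22: 36.0,
    60: 37.1,
}

temps = sorted(data)
for low, high in zip(temps, temps[1:]):
    change = data[high] - data[low]
    trend = "increased" if change > 0 else "decreased" if change < 0 else "stayed the same"
    print(f"{low} °C -> {high} °C: solubility {trend} by {abs(change):.1f} g/100 mL")
```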

Open access | Published: 26 June 2024

Prediction of CO₂ solubility in ionic liquids for CO₂ capture using deep learning models

Mazhar Ali¹, Tooba Sarwar¹, Nabisab Mujawar Mubarak²,³, Rama Rao Karri²,⁴, Lubna Ghalib⁵, Aisha Bibi⁶ & Shaukat Ali Mazari¹

Scientific Reports, volume 14, Article number: 14730 (2024)

  • Energy science and technology
  • Engineering
  • Environmental sciences
  • Materials science
  • Mathematics and computing

Ionic liquids (ILs) are highly effective for capturing carbon dioxide (CO₂). The prediction of CO₂ solubility in ILs is crucial for optimizing CO₂ capture processes. This study investigates the use of deep learning models for CO₂ solubility prediction in ILs with a comprehensive dataset of 10,116 CO₂ solubility data points in 164 kinds of ILs under different temperature and pressure conditions. Deep neural network models, including an Artificial Neural Network (ANN) and a Long Short-Term Memory (LSTM) network, were developed to predict CO₂ solubility in ILs. The ANN and LSTM models demonstrated robust test accuracy in predicting CO₂ solubility, with coefficient of determination (R²) values of 0.986 and 0.985, respectively. Both models' computational efficiency and cost were investigated, and the ANN model achieved reliable accuracy with a significantly lower computational time (approximately 30 times faster) than the LSTM model. A global sensitivity analysis (GSA) was performed to assess the influence of process parameters and associated functional groups on CO₂ solubility. The sensitivity analysis results provided insights into the relative importance of input attributes on the output variable (CO₂ solubility) in ILs. The findings highlight the significant potential of deep learning models for streamlining the screening process of ILs for CO₂ capture applications.


Introduction

Carbon dioxide (CO 2 ) released into the atmosphere through industrial production has resulted in significant environmental issues, including global climate change 1 . To mitigate the emission and accumulation of CO 2 , the capture and separation of CO 2 from natural and flue gas have emerged as effective approaches 2 . Various technologies have been developed for CO 2 separation, including amine scrubbing 3 , pressure swing adsorption (PSA) 4 , temperature swing adsorption (TSA) 5 , and membrane separation technology 6 . Among these technologies, amine absorption is widely utilized in industries. The commonly employed amine solvents for CO 2 absorption include monoethanolamine (MEA), methyldiethanolamine (MDEA), and diethanolamine (DEA) 1 . However, these absorbents have limitations, such as being prone to volatility and demanding high energy consumption during desorption 7 . Traditional CO 2 capture methods, like amine scrubbing, are hindered by their high energy demands for regeneration and significant solvent loss. This combination not only increases operational costs but also contributes to a larger environmental footprint 8 .

In the past decade, ionic liquids (ILs) have become the most potential applicants for CO 2 capture. The utilization of ILs in carbon capture represents a favourable alternative to conventional amine-based solvents, primarily due to two key advantages: their remarkably low vapour pressure and the ability to tailor their molecular structure to suit specific requirements 9 . These remarkable achievements of ILs are due to their unique molecular structures (anions, cations, and functional groups) and exceptional properties such as thermal stability, nonvolatility, and outstanding CO 2 solubility 10 , 11 , 12 , 13 , 14 . The general properties of the majority of ILs are presented in Table 1 15 .

One major challenge in utilizing ILs for CO₂ capture is their high viscosity, which stems from the complex synthesis and purification processes required to create ILs. Compared to conventional solvents typically used for CO₂ capture, ILs generally exhibit significantly higher viscosity 16. As highlighted by Krupiczka et al. 17, the viscosity of ILs can be altered by employing appropriate combinations of cations and anions. Notably, the anion has a greater influence on viscosity than the cation. Increasing the alkyl chain length within the cation generally leads to a corresponding increase in IL viscosity 17. In terms of anion effects on viscosity in imidazolium-based ILs, the reported order is [bmim][NTf₂] < [bmim][CF₃SO₃] < [bmim][BF₄] < [bmim][PF₆]. ILs are highly adaptable and can be customized for specific applications by varying the types and ratios of cations and anions; this versatility serves as the basis for their design 18.

The development of accurate models to predict the solubility of CO 2 in ILs is a critical aspect of the design of ILs for carbon capture using computer-aided molecular design (CAMD). Traditional thermodynamic models have been utilized to estimate gas solubilities, including CO 2 , in ILs. Some of these models include the Peng–Robinson–Stryjek–Vera (PRSV) equation of state 19 , group contribution-based Statistical Associating Fluid Theory (SAFT) 20 , cubic equations of state combined with the UNIFAC (UNIQUAC Functional-group Activity Coefficients) method 21 , and COSMO-RS (Conductor-like Screening Model for Real Solvents) 22 . These models are developed on robust thermodynamic principles and can accurately assess the effects of temperature and pressure. However, their ability to deliver precise quantitative solubility predictions may sometimes be inadequate.

In addition to rigorous thermodynamic modelling, the quantitative structure–property relationship (QSPR) method provides another practical approach for predicting solubility. This method establishes a quantitative correlation between the property of interest and specific structural descriptors of the molecules. Group contribution (GC) methods, which utilize the occurrences of functional groups in the molecule as molecular descriptors, are commonly employed in CAMD. Linear GC models are suitable for specific properties, while nonlinear GC models are required for accurately predicting other properties. Recently, there has been significant advancement and broad adoption of machine learning (ML) models for developing complex nonlinear QSPR or GC models. These models have demonstrated their effectiveness in estimating various properties, including CO 2 solubility 23 , H 2 S solubility 24 , and surface tension 25 . ML models have emerged as a powerful tool for CO 2 capture research. Their ability to learn from data allows them to rapidly predict complex material properties, like CO 2 solubility in ILs 23 . This reduces the time and cost associated with traditional methods and provides valuable insights into the key factors governing CO 2 capture efficiency 26 .

Neural network-based machine learning models have gained significant popularity in predictive analytics, particularly for estimating CO 2 solubility. Eslamimanesh et al. 27 designed an artificial neural network (ANN) model to predict the solubility of CO 2 in 24 commonly used ILs for a dataset consisting of 1128 data points. Venkatraman and Alsberg 28 applied various machine learning algorithms such as Partial-Least-Squares Regression (PLSR), Conditional Inference Trees (CTREE), and Random Forest (RF) to a dataset comprising 10,848 solubility measurements with 185 ILs. Soleimani et al. 29 applied a decision tree-based stochastic gradient boosting (SGB) algorithm to predict H 2 S solubility in ILs using 465 experimental data points. Song et al. 30 have developed ANN-GC and support vector machine (SVM-GC) models for CO 2 solubility prediction in ILs with data containing 10,116 data points (with 124 different ILs). Deng et al. 31 used three deep-learning models to predict CO 2 solubility in ILs. They used a Convolutional Neural Network (CNN), Deep Neural Network (DNN), and a Recurrent Neural Network (RNN) with a relatively small dataset of 218 data points for 13 types of ILs. Recently, Tian et al. 32 utilized ionic fragment contribution (IFC) with ANN and SVM models to predict CO 2 solubility data with 13,055 instances in 164 kinds of ILs. Liu et al. 33 estimated the CO 2 solubility of 1517 data in 20 different ILs using Particle Swarm Optimization (PSO), Grey Wolf Optimization (GWO), and Sparrow Search Algorithm (SSA) based on SVM ML models. These models achieved higher prediction accuracy depending on the algorithm architecture and the associated dataset. Smaller datasets, which are less complex to train, generally achieve higher accuracy than larger datasets. DNNs excel at handling large datasets due to their ability to learn complex patterns with a higher number of neurons. However, for optimal performance with extensive data, optimization and regularization techniques become crucial to prevent overfitting.

This study aims to develop deep neural network-based models for the larger dataset to predict CO 2 solubility in ILs. An ANN model and a long short-term memory (LSTM)-RNNs architecture are employed to address CO 2 solubility prediction on this extensive dataset, which contains 10,116 CO 2 solubility measurements from the work of Song et al. 30 . In their study, a simple ANN model with one hidden layer (8 neurons) was implemented on this data. However, their work lacks information about validation, hyperparameter tuning, and regularization techniques for such a large dataset of CO 2 solubilities.

This work builds upon the previous study 30 by proposing a DNNs-based ANN model with three hidden layers, each containing 64 neurons. Model validation and hyperparameter tuning were performed to assess the model's performance. The effectiveness of both the ANN and LSTM models was assessed based on computational costs and memory usage during model training. Furthermore, global sensitivity analysis tools, such as Sobol and Morris methods, were used to investigate the impact of various variables (including functional groups) on CO 2 solubility in different ILs. ILs are promising for capturing CO 2 emissions from power plants and industrial processes. By accurately predicting CO 2 solubility, researchers design ILs with optimal CO 2 capture capacity, leading to more efficient CCS technologies.

Dataset/experimental data

This study utilizes CO 2 solubility data originally collected by Venkatraman and Alsberg 28 and meticulously preprocessed and compiled by Song et al. 30 for machine learning model training. The quality of the preprocessed data rendered further modifications unnecessary for our current analysis. This dataset includes 10,116 data points with 53 features that predict CO 2 solubility in ILs. It covers 124 ILs across a temperature range of 243.2 K to 453.15 K and a pressure range of 0.00798 bar to 499 bar. The cations include imidazolium, pyridinium, piperidinium, pyrrolidinium, phosphonium, sulfonium, and ammonium. The anions include tetrafluoroborate [BF 4 ], dicyanamide [DCA], hexafluorophosphate [PF 6 ], chloride [Cl], nitrate [NO 3 ], tricyanomethanide [C(CN) 3 ], thiocyanate [SCN], bis(trifluoromethylsulflonyl)amide [Tf 2 N], hydrogen sulfate [HSO 4 ], and methylsulfate [MeSO 4 ] etc.

This study aims to develop deep-learning models for predicting CO 2 solubility in ILs. Previous research by Song et al. 30 on this dataset has not adequately addressed the optimization and regularization of neural network modelling. Our study fills this gap by focusing on several critical aspects, including model validation, hyperparameter tuning, computational efficiency, and the impact of neuron configurations on model performance. The modelling will use temperature, pressure (considered the most important features in CO 2 capture due to their direct impact on IL performance), and other relevant factors (referred to as input parameters) to predict CO 2 solubility (the output). The dataset of 10,116 data points is divided into training (80%) and testing (20%) sets to develop the deep learning models. This means the training set contains 8093 data points (80%), and the testing set contains 2023 (20%). During the model's training, 10% of the data was set aside for validation to ensure optimal performance. This dedicated validation set enabled monitoring of the model's validation loss curves throughout the training process. By analyzing these curves, potential overfitting could be identified, allowing for necessary adjustments to the model's architecture or training parameters.
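As a concrete illustration of the split described above, the following sketch shows one way to produce an 80/20 train/test division with scikit-learn; the random data and the random_state value are placeholders, since the paper does not provide its splitting code.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for the real dataset: 10,116 samples,
# 53 input features (temperature, pressure, functional-group counts).
X = np.random.rand(10116, 53)
y = np.random.rand(10116)          # CO2 solubility (mole fraction)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42)   # ~8093 training / 2023 testing points

# During training, validation_split=0.1 in model.fit() reserves 10% of the
# training set for validation, mirroring the strategy described in the text.
```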

Model development

This section delves into the development of two deep learning models—an ANN and an LSTM-RNN network—for predicting CO 2 solubility in ILs.

Artificial neural network (ANN)

ANN is a biologically inspired network of artificial neurons modelled to perform various tasks 34 . These tasks include regression 35 , classification 36 , verification, and recognition. ANN model can recognize nonlinear complex relationships and can be used to predict CO 2 solubility 37 . The literature indicates that various studies have used an ANN model to predict CO 2 solubility in ILs 37 , 38 , 39 , 40 . An ANN model consists of different layers and certain numbers of neurons in each layer. As a feed-forward neural network, ANN consists of three layers—input, hidden, and output. The topology is shown in Fig.  1 .

Figure 1: Schematic structure of the ANN model 30.

The input layer receives 53 features consisting of temperature, pressure, and functional groups, giving an input vector \(p\) of size (53 × 1). The function of the hidden layers is to transform this input information and pass it to the output layer, where solubility is predicted. The output of a hidden layer, \(f_1(a_1)\), is given by Eq. (1), and Eq. (2) defines the output of the output layer.

The ANN architecture comprises one input layer, one output layer, and three hidden layers. Each hidden layer is equipped with 64 neurons to optimize the model accuracy (see Supplementary Fig. S1). A detailed discussion regarding the adjustment of neurons is presented in "ANN model". Activation functions are used for the hidden and output layers; their primary role is to transform the summed, weighted input of a node into the output value passed to the next layer. In other words, the activation function decides how much a neuron's input contributes to the prediction. Different activation functions are available for neural networks, such as the sigmoid function, the tanh (hyperbolic tangent) function, the rectified linear unit (ReLU), and softmax. The present study used ReLU for both the hidden and output layers. ReLU is a piecewise-linear function, \(f(x) = \max(0, x)\), and it is computationally more efficient than the sigmoid and tanh functions because it does not activate all neurons at the same time 41.
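A minimal Keras sketch of the architecture just described: 53 inputs, three hidden layers of 64 ReLU neurons, a ReLU output layer, and the Adam optimizer with a learning rate of 0.001. The loss function, epoch count, and batch size are not stated for the ANN in this extract, so the values below are assumptions; X_train and y_train come from the data-split sketch earlier.

```python
import tensorflow as tf

ann = tf.keras.Sequential([
    tf.keras.Input(shape=(53,)),                    # temperature, pressure, group counts
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="relu"),    # output layer also uses ReLU, as stated
])
ann.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
            loss="mse", metrics=["mae"])            # loss choice is an assumption

history = ann.fit(X_train, y_train, validation_split=0.1,
                  epochs=200, batch_size=32, verbose=0)   # epochs/batch size assumed
```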

Long short-term memory (LSTM) model

An LSTM is a special type of RNN architecture. RNN models perform poorly on long-term dependencies due to the vanishing gradient problem 42. The LSTM is an extension of the RNN that uses memory structures to learn long-term information, which efficiently removes the gradient problem 43, 44. The LSTM model gathers important information from the input and saves it for a long period in a memory cell within the LSTM unit. A simple LSTM unit contains a cell, an input gate, a forget gate, and an output gate, as shown in Fig. 2. The cell remembers values over arbitrary time intervals. The input gate decides which information should be added to the memory cell, while the forget gate decides whether to remove or save that information. Lastly, the output gate decides whether the existing information should proceed for analysis. Each LSTM cell contains six components at each timestep: a forget gate \(f\) (a neural network with a sigmoid function), a candidate layer \(\widetilde{C}\) (a neural network with a tanh function), an input gate \(i\) (a neural network with a sigmoid function), an output gate \(o\) (a neural network with a sigmoid function), a hidden state \(h\) (a vector), and a memory state \(C\) (a vector), as shown in Eqs. (4) to (9). The first parameter is the forget gate \(f_t\), which performs a linear calculation based on \(x_t\) (the current input) and \(h_{t-1}\) (the previous hidden state). Its output lies between 0 and 1, where 0 means the previous memory state is completely forgotten and 1 means it is passed to the cell unchanged. The second parameter is the input gate, which contains two layers (a sigmoid layer and a tanh layer). The sigmoid layer decides which values to update, and the tanh layer creates a new candidate vector \(\widetilde{C}_t\) to be added to the LSTM memory; these are computed with Eqs. (5) and (6), and the cell state \(C_t\) is then updated using Eq. (7). Equation (8) gives the output gate \(o_t\), and the final output at the hidden state \(h_t\) is obtained using Eq. (9).
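The equations referred to as (4)-(9) do not survive in this extract. In the standard LSTM formulation (which the description above follows, though the paper's exact notation may differ), the gates and states are computed as:

\(f_t = \sigma\!\left(W_f \left[h_{t-1}, x_t\right] + b_f\right)\)

\(i_t = \sigma\!\left(W_i \left[h_{t-1}, x_t\right] + b_i\right)\)

\(\widetilde{C}_t = \tanh\!\left(W_C \left[h_{t-1}, x_t\right] + b_C\right)\)

\(C_t = f_t \odot C_{t-1} + i_t \odot \widetilde{C}_t\)

\(o_t = \sigma\!\left(W_o \left[h_{t-1}, x_t\right] + b_o\right)\)

\(h_t = o_t \odot \tanh\!\left(C_t\right)\)

where \(\sigma\) is the sigmoid function and \(\odot\) denotes element-wise multiplication.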

Figure 2: Basic LSTM layer structure 45.

The LSTM architecture comprises an input layer, two hidden layers of 64 neurons, and an output layer (see Supplementary Fig. S1 ). While RNN architectures exist, this study employs LSTM networks due to their well-established capability to handle sequential data with long-term dependencies. In CO 2 solubility prediction, the relationship between past and present data points can be crucial, especially when considering factors like temperature history or pressure fluctuations. Unlike simpler RNNs that struggle with vanishing gradients, LSTMs incorporate memory cells and gates that effectively capture and utilize these long-term dependencies, leading to potentially more accurate CO 2 solubility predictions.
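A sketch of this LSTM network in Keras: two LSTM layers of 64 units (tanh activation, the Keras default) followed by a dense output. Treating each sample's 53 features as a length-one sequence is an assumption about how the tabular data were fed to the recurrent layers; the loss function is also assumed. The training settings reported later in the text (batch size 16, 280 epochs) appear in the commented fit call, which reuses X_train and y_train from the earlier split sketch.

```python
import tensorflow as tf

lstm = tf.keras.Sequential([
    tf.keras.Input(shape=(1, 53)),          # one timestep, 53 features (assumed framing)
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),               # CO2 solubility output
])
lstm.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
             loss="mse", metrics=["mae"])   # loss choice is an assumption

# lstm.fit(X_train.reshape(-1, 1, 53), y_train, validation_split=0.1,
#          epochs=280, batch_size=16)
```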

Sobol sensitivity analysis

Sobol sensitivity analysis, introduced by Sobol 46, is a variance-based method that offers a global perspective. It aims to determine the contribution of each parameter, and of the interactions among parameters, to the variance observed in the model output. In general, the allocation of the overall output variance to individual model parameters and their interactions is written as

\(D(f) = \sum_{i} D_i + \sum_{i<j} D_{ij} + \cdots + D_{12 \ldots p}\)

where \(D(f )\) represents the total variance of the output metric \(f;\) \({D}_{i}\) is the first-order variance contribution of the \({i}^{th}\) parameter, \({D}_{ij}\) is the second-order contribution of the interaction between parameters \(i\) and \(j\) ; and \({D}_{12\dots p}\) contains all interactions higher than third-order, up to \(p\) total parameters.

The first-order and total-order sensitivity indices are defined as follows.

First-order index:

\(S_i = \dfrac{D_i}{D(f)}\)

Total-order index:

\(S_{Ti} = 1 - \dfrac{D_{\sim i}}{D(f)}\)

The first-order index captures the relative contribution of parameter \(i\) to the total output variance, excluding any effects of or interactions with other parameters. The total-order index equals one minus the fraction of the total variance assigned to \(D_{\sim i}\), the contribution of all parameters except \(i\); by excluding parameter \(i\) from the analysis, the total-order index attributes the resulting decrease in variance to that specific parameter 47. The difference between a parameter's first-order and total-order indices corresponds to the impact of its interactions with other parameters.

This study analyzes the total order indices to ascertain the relative importance of model parameters regarding sensitivity. Total order indices, obtained through Sobol sensitivity analysis, capture the combined impact of each input parameter on the model output, accounting for both individual effects and interactions with other parameters. This analysis is crucial for identifying the parameters that significantly influence the variation in predicted CO 2 solubility. Alternative sensitivity analysis methods might not provide the same level of detail. For instance, Morris sensitivity analysis, while efficient for initial screening, might not offer the in-depth information about individual and interactive effects that Sobol sensitivity analysis provides through total order indices. To ensure the robustness of our findings, we also employed the Morris method, allowing us to compare and select the most effective approach.
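As an illustration of how such indices can be computed in practice, the sketch below uses the SALib package with a toy two-variable problem and a toy surrogate function standing in for the trained solubility model; the variable bounds are taken from the dataset description, and everything else is an assumption for demonstration.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Reduced, illustrative problem: the full study uses 53 inputs.
problem = {
    "num_vars": 2,
    "names": ["temperature", "pressure"],
    "bounds": [[243.2, 453.15], [0.00798, 499.0]],
}

def surrogate(x):
    """Toy stand-in for the trained solubility model (placeholder, not the paper's model)."""
    temperature, pressure = x
    return (pressure / (pressure + 50.0)) * (1.0 - (temperature - 243.2) / 500.0)

X = saltelli.sample(problem, 1024)                 # Saltelli sampling for Sobol indices
y = np.array([surrogate(x) for x in X])

Si = sobol.analyze(problem, y)
print("First-order:", Si["S1"])
print("Total-order:", Si["ST"])
```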

Morris sensitivity analysis

The method of Morris 48 calculates global sensitivity measures by utilizing a set of local derivatives, also known as elementary effects, sampled on a grid that covers the parameter space. The method is based on a one-at-a-time (OAT) approach, in which each parameter, denoted \(x_i\), is perturbed along a grid with a step size of \(\Delta_i\). This perturbation allows a trajectory to be traced through the parameter space, enabling sensitivity analysis across different parameter values. In a model consisting of \(p\) parameters, a single trajectory comprises a sequence of \(p\) perturbations. Each trajectory provides an estimate of the elementary effect for each parameter, determined by the ratio of the change in the model output to the change in the respective parameter. Equation (13) demonstrates the computation of a single elementary effect for the \(i^{th}\) parameter:

\(EE_i = \dfrac{f(x_1, \ldots, x_i + \Delta_i, \ldots, x_p) - f(x)}{\Delta_i}\)

where \(f (x)\) represents the prior point in the trajectory. In alternative formulations, both the numerator and denominator in the calculation are normalized by the values of the function and parameter \({x}_{i}\) , respectively, at a reference or prior point \(x\) 49 . This normalization ensures that the elementary effect is expressed relative to the function and parameter values at the reference point. Employing the single trajectory presented in Eq. ( 8 ) makes it possible to compute the elementary effects for each parameter with just p  +  1 model evaluations. Nevertheless, since this one-at-a-time (OAT) method relies solely on a single trajectory, its results heavily rely on the initial point \(x\) location within the parameter space and do not account for interactions between parameters. To address this limitation, the Morris 48 extends the OAT method by conducting it across N trajectories throughout the parameter space.

The Morris method relies on the concept of elementary effects. These effects represent the change in the model output (predicted CO 2 solubility) caused by small perturbations to a single input parameter across different points in the parameter space. The Morris method utilizes a grid-based approach to compute elementary effects. It repeatedly samples the parameter space, slightly increasing or decreasing the value of a single parameter at each sample point while keeping all other parameters fixed. The difference between the model outputs obtained with the original and perturbed parameter value is the corresponding elementary effect 48 . The mean effect ( μ ) defines a parameter's average of elementary effects and indicates its overall influence on the model output. A positive value suggests the parameter generally increases CO 2 solubility, while a negative value indicates the opposite.
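A matching Morris screening with SALib, reusing the problem definition and toy surrogate from the Sobol sketch above; mu_star is the mean absolute elementary effect and sigma its spread. As before, this is only an illustrative sketch, not the paper's code.

```python
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze import morris

X_morris = morris_sample(problem, N=100, num_levels=4)    # trajectories through parameter space
y_morris = np.array([surrogate(x) for x in X_morris])

res = morris.analyze(problem, X_morris, y_morris, num_levels=4)
print("mu:", res["mu"])            # signed mean elementary effect
print("mu_star:", res["mu_star"])  # mean absolute elementary effect
print("sigma:", res["sigma"])      # spread, indicating interactions/non-linearity
```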

Statistical indexes as an error function

In this section, the reliability and accuracy of the predictive models were evaluated through statistical analysis. Five key statistical indexes were determined: the coefficient of determination (R²), root mean square error (RMSE), mean squared error (MSE), mean absolute error (MAE), and average absolute relative deviation (AARD). These indexes provide a comprehensive assessment of the models' performance and their ability to predict CO₂ solubility accurately.
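Written out explicitly, the five statistics can be computed as below; treating AARD as the mean absolute relative deviation expressed as a percentage is an assumption about the paper's exact definition.

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    """Return the five statistics named above for a set of predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mse = np.mean(err ** 2)
    return {
        "R2":     1 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2),
        "MSE":    mse,
        "RMSE":   np.sqrt(mse),
        "MAE":    np.mean(np.abs(err)),
        "AARD_%": 100 * np.mean(np.abs(err / y_true)),
    }

print(evaluation_metrics([0.20, 0.50, 0.80], [0.21, 0.48, 0.83]))
```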

Results and discussion

The performance efficiency in predicting CO₂ solubility differed between models even when the same parameters and optimization methods were used. A suitable optimizer was needed to update the attributes of the neural network models. Ruder 50, in a comprehensive review of modern optimization algorithms, recommended 'Adam' as the superior choice among the various optimizer techniques; hence the Adam optimizer, coupled with the ReLU activation function, was employed for each model to achieve optimized efficiency.

In neural network modelling, the learning rate is a crucial hyperparameter that influences how the model updates its weights during training. A well-chosen learning rate ensures the model learns effectively, neither too slowly (which can stall convergence) nor too quickly (which can cause the optimizer to overshoot). A learning rate of 0.001 was selected within the tested range because further decreases resulted in a significant decline in model performance.

A critical step in neural network design is determining the ideal number of neurons in hidden layers. Having too few neurons can lead to underfitting, where the model fails to capture crucial patterns in the data. Conversely, too many neurons can cause overfitting, where the model memorizes noise instead of learning the underlying relationships. This study began with an architecture containing 8 neurons per hidden layer. We then systematically increased this number to 64 neurons per layer, searching for the optimal balance between underfitting and overfitting (see Fig.  14 ). The ANN model incorporates three hidden layers, each containing 64 neurons. A visual representation of this architecture, generated using the NETRON tool 51 , is provided in Supplementary Fig. S1 of the Supplementary Material.

Model validation was performed to verify the accuracy and fit of the model. The training and validation loss functions served as metrics for evaluating the efficiency of the ANN model: the training loss measured how effectively the model learned the patterns within the training data, while the validation loss indicated its ability to generalize these patterns to new data. Ideally, the training loss should decrease as the model learns.

In contrast, the validation loss should remain stable or slightly increase, indicating that the model avoids overfitting the training data. Figure  3 illustrates the training and validation loss curves using the MAE metric. The MAE loss curves depict a significant decrease in training loss (blue line), indicating successful learning of the ANN model for the training data. The validation loss (red line) remains stable, suggesting that the model avoids overfitting and generalizes well to new data.

Figure 3: Mean absolute error (MAE) loss curves for the ANN model showing training and validation performance over epochs.

This study aims to optimize the performance of the ANN model by utilizing the different activation functions and a higher number of neurons compared to the previous study 30 for this dataset. The ANN model exhibited improved performance with an increased number of neurons. The higher number of neurons enables the network to understand complex decision boundaries better and express a broader spectrum of functions, ultimately leading to an improved model capacity 52 , 53 . Table 2 summarizes the performance of the ANN model on the training (8,093 data points) and testing datasets (2,023 data points) using R 2 , MAE, RMSE, MSE, and AARD metrics. The R 2 of 0.986 and MAE of 0.0171 indicate a good fit between the predictions and the experimental values. The ANN model showed a decrease in MSE values as the number of neurons in the hidden layers increased, indicating that a more complex architecture enhanced its learning capability. Figure  4 visualizes the comparison between the actual and predicted CO 2 solubility values for both the training and testing sets. It is evident from Fig.  4 that both the training and testing datasets exhibit a strong relationship with the diagonal line, indicating a good fit with the experimental CO 2 solubility data. However, a few outliers are observed, which may be attributed to measurement variations.

Figure 4: Comparison of actual and ANN-predicted CO₂ solubility.

The discrepancy between predicted and experimentally measured values is analyzed to assess model performance. Figure 5 shows the distribution of errors between predicted and experimental solubilities, \(x_{CO_2}^{predicted} - x_{CO_2}^{experimental}\), for the ANN model. Most data points fall within a narrow range of −0.05 to 0.05, indicating a smooth distribution close to zero. However, a few outliers exhibit higher error values. Figure 6 presents a histogram of the error distribution for the ANN model to provide further insight into the range of prediction errors. The error distribution is concentrated around zero, with minimal deviation. This suggests the ANN model accurately predicts CO₂ solubility across various temperatures and pressures in ILs.

Figure 5: ANN model errors for predicting CO₂ solubility.

Figure 6: Distribution of prediction errors of the ANN model.

This study aims to implement a Long Short-Term Memory (LSTM) model for predicting CO 2 solubility in ILs. LSTM models have seen limited application in this field. Deng et al. 31 have used a classic Recurrent Neural Network (RNN) model for predicting CO 2 solubility in ILs, employing a dataset of 180 data points. While RNNs are typically used for time series problems to analyze long-term dependencies, their applicability to regression problems has been demonstrated 31 . This study significantly improves over previous work by Song et al. 30 by replacing the SVM model with an LSTM neural network model. This substitution leads to a more accurate prediction of CO 2 solubility in ILs.

The LSTM model was structured with a dual-layer configuration, each layer containing 64 neurons (Supplementary Fig. S2). The widely used "tanh" activation function is the default choice for all hidden layers 34. Dropout is typically used to address overfitting 54; however, because the LSTM model showed no signs of overfitting and performed adequately, dropout was not incorporated. The Adam optimizer with a learning rate of 0.001 was used to optimize the training of the LSTM model. For hyperparameter tuning, various batch sizes were tested, and a batch size of 16 with 280 epochs yielded the best results. The number of epochs was determined by observing the training and validation loss curves. Figure 7 demonstrates the MAE loss curves for the training and validation data. The training data reveal a significant drop in MAE (blue line), highlighting the robust learning capability of the LSTM model. The MAE loss for the validation data (red dashed line) shows the stability of the model and the absence of overfitting. Table 3 provides evaluation metrics to compare the training and testing efficiency of the LSTM model. The LSTM model achieved an R² of 0.985 and an MAE of 0.0175 on the testing data, differing from the training data by 0.41% and 11.9%, respectively. The predicted CO₂ solubilities are compared with the experimental values in Fig. 8. The data points for the training (black circles) and testing datasets (blue triangles) are evenly distributed around the diagonal line, indicating good agreement between the predicted and experimental CO₂ solubility values.

Figure 8: Comparison of experimental and LSTM-predicted CO₂ solubility.

Figure 9 depicts the distribution of errors between the predicted and experimental CO₂ solubility values, \(x_{CO_2}^{predicted} - x_{CO_2}^{experimental}\), for the LSTM model on both the training and testing datasets. The LSTM model also shows a favorable error distribution for the training and testing data, with errors falling between −0.1 and 0.1 and a consistent distribution centered around zero. This suggests good accuracy in predicting CO₂ solubility; however, it is worth noting that the ANN model achieves a slightly lower error margin. Figure 10 uses histograms to provide a more granular visualization of the error distribution for the LSTM model. The histograms reveal minimal deviations from zero, indicating that the model predicts CO₂ solubility accurately.

Figure 9: LSTM model errors for predicting CO₂ solubility.

Figure 10: Distribution of prediction errors of the LSTM model.

Models comparison

A comprehensive evaluation compares the performance and computational efficiency of ANN and LSTM models for predicting CO 2 solubility in ILs. This evaluation considers accuracy, training time (CPU usage), and memory expenditure during training. The computational cost of neural network models during training is analyzed by comparing their CPU time (seconds) and memory consumption (Mebibytes, MiB).

Figure  11 presents the graphical representation of CPU time and memory usage over the training epochs for the ANN and LSTM models. In terms of CPU time, the ANN model proves to be much more efficient. Each training epoch for the ANN model takes approximately 1 s (Fig.  11 a), whereas the LSTM model requires a significantly longer time, averaging between 20 and 30 s per epoch (Fig.  11 b). The total CPU time of the ANN model (4.03 min) is 31 times faster than that of the LSTM model (126.85 min) during the model's training. In comparing peak memory usage between ANN and LSTM models, the LSTM model consumed the most memory, reaching a peak of 733.93 MiBs at the end of the training, followed by the ANN model, which peaked at 535.98 MiBs. LSTMs incorporate memory cells that store past information, resulting in a larger memory footprint compared to the more straightforward layer-based structure of ANNs. LSTMs are inherently more complex architectures, including memory cells and gates (input, output, and forget) to control information flow and contribute to a higher computational load during training.
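One simple way to reproduce this kind of training-cost comparison is shown below, using only the standard library; the paper's MiB figures were presumably obtained with a dedicated profiler, so treat this as an approximate sketch rather than the authors' method.

```python
import time
import tracemalloc

# Note: tracemalloc only tracks Python-level allocations, not a framework's native buffers.
tracemalloc.start()
t0 = time.process_time()

# model.fit(...)   # train the ANN or LSTM here

cpu_seconds = time.process_time() - t0
_, peak_bytes = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"CPU time: {cpu_seconds:.1f} s, "
      f"peak traced memory: {peak_bytes / 2**20:.1f} MiB")
```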

Figure 11: CPU time and memory usage during model training: (a) ANN model, (b) LSTM model.

Table 4 summarizes the statistical comparison of ANN and LSTM models regarding model performance and error ranges. The ANN model performed slightly better than the LSTM model in terms of prediction accuracy. The R 2 values of testing data in the ANN and LSTM models are 0.986 and 0.985, respectively. The MAE of the ANN model is 2.3% lower than the LSTM. Although both models have demonstrated excellent performance, the ANN model outperforms the LSTM model regarding computational cost and efficiency.

Regarding the AARD values, the LSTM model (10%) exhibits less deviation than the ANN model (28%). The ANN model initially recorded an AARD of 57.5%, which was reduced to 28.05% by increasing the number of hidden layers from 1 to 3 and adjusting the neuron count. The relatively high AARD can be attributed to the use of a large dataset with diverse input parameters.
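For reference, the evaluation metrics quoted above (R 2 , MAE, and AARD) can be computed as in the following sketch, which uses their conventional definitions; the paper's own formulas appear in its methodology section and may differ in minor details (e.g., whether AARD is reported as a fraction or a percentage).

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    """R2, MAE and AARD (%) using their conventional definitions."""
    y_true = np.asarray(y_true, dtype=float).ravel()
    y_pred = np.asarray(y_pred, dtype=float).ravel()
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    mae = np.mean(np.abs(y_true - y_pred))
    # AARD can be dominated by points where the measured solubility is near zero.
    aard = 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))
    return r2, mae, aard
```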

Song et al. 30 developed an ANN-GC model using the current dataset to predict CO 2 solubility in ILs. Figure 12 compares the evaluation metrics of the present ANN and LSTM models with those of the ANN-GC model from the previous study 30 . The LSTM and ANN models slightly outperformed the ANN-GC model in prediction accuracy; specifically, the prediction accuracy of the current ANN model increased by 0.2%, accompanied by a 13% reduction in MAE compared with the ANN-GC model 30 . Table 5 compares the ANN modelling methodology of this study with that of the previous study 30 . This study adopts the ReLU activation function because of its computational efficiency and effectiveness with large datasets; it captures non-linear patterns and gradients well, making it suitable for a wide range of problems. It is worth noting that Song et al. 30 achieved high accuracy with only 7 neurons in the hidden layer, compared with the 64 neurons per hidden layer used here. Selecting the optimal number of neurons is one of the most crucial steps in designing neural networks, especially for a large dataset: a larger number of neurons and more hidden layers are generally preferred because they allow the network to model the complex patterns and relationships in big data more effectively. The previous study 30 did not report details of its hyperparameter tuning and optimization, whereas this study investigated training and testing accuracy by adjusting the learning rate (using the Adam optimizer) and the batch size. Figure 13 visualizes the effect of varying the number of neurons and hidden layers on ANN model accuracy; optimal results were achieved with 3 hidden layers and 64 neurons in each layer. Optimization aims to minimize the discrepancies between the predicted and actual outputs, and as observed in Fig. 13, adjusting the number of hidden layers and neurons significantly reduced prediction errors.

Figure 12. Performance comparison of the ANN and LSTM models with the ANN-GC model developed by Song et al. 30 .

Figure 13. ANN model accuracy with different numbers of neurons and hidden layers.
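A hedged sketch of the neuron and hidden-layer tuning summarized in Fig. 13 is shown below. The grid values, the reduced epoch count, and the reuse of the illustrative `X_train`/`y_train` arrays are assumptions for illustration, not the authors' actual search space.

```python
import itertools
from tensorflow import keras
from tensorflow.keras import layers

def build_ann(n_hidden_layers, n_neurons, n_features=55):
    """Fully connected ANN with ReLU hidden layers and a single regression output."""
    ann = keras.Sequential([layers.Input(shape=(n_features,))])
    for _ in range(n_hidden_layers):
        ann.add(layers.Dense(n_neurons, activation="relu"))
    ann.add(layers.Dense(1))
    ann.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mae")
    return ann

# Small illustrative grid; the paper found 3 hidden layers with 64 neurons each to be optimal.
results = {}
for n_layers, n_neurons in itertools.product([1, 2, 3], [16, 32, 64]):
    ann = build_ann(n_layers, n_neurons)
    hist = ann.fit(X_train, y_train, validation_split=0.2,
                   batch_size=16, epochs=50, verbose=0)
    results[(n_layers, n_neurons)] = min(hist.history["val_loss"])

best_layers, best_neurons = min(results, key=results.get)
print(f"Best configuration: {best_layers} hidden layers, {best_neurons} neurons each")
```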

A study by Deng et al. 31 employed an ANN model and achieved a high R 2 of 0.999. However, their model was trained on a relatively small dataset of 218 data points covering only 13 types of ILs. This limited dataset size might contribute to the high accuracy, as smaller datasets can be prone to overfitting. Additionally, their ANN architecture used a 7-layer network with layer widths decreasing from 500 neurons down to 1 in the final layer. While this complex architecture may have performed well on their specific dataset, their study did not explicitly evaluate the impact of the number of neurons on model performance.

In addition to the DL models, traditional ML regression techniques, namely Random Forest Regression (RFR) and Gradient Boosting Regression (GBR), were applied to this comprehensive dataset; they achieved R 2 values of 0.974 and 0.966, respectively. A detailed visualization of the predicted values and their associated error ranges for both models is presented in Supplementary Fig. S3. A review of multiple literature sources was conducted to obtain a comprehensive overview of the prediction accuracy of existing models with respect to statistical parameters, the number of data points, and the variety of ILs used for predicting CO 2 solubility. Table 6 compares the performance of various machine learning and thermodynamics-based models for CO 2 solubility prediction in ILs. Interestingly, models with higher R 2 values and lower AARD values are often associated with smaller datasets and fewer ILs. Despite the challenges associated with larger datasets, the study by Venkatraman and Alsberg 28 demonstrates promising results with a larger number of ILs and data points: their RF and CTREE models achieved R 2 values of 0.92 and 0.82, respectively. Song et al. 30 reported the most extensive dataset for ILs; in their work, the authors developed ANN-GC and SVM-GC models, yielding reliable R 2 values of 0.9836 and 0.9783, respectively. Among the surveyed studies, Mesbah et al. 55 introduced an MLP-ANN model that achieved the highest R 2 value of 0.9987 and the lowest AARD value of 1.8416; this model was evaluated on a dataset comprising 20 ILs and 1386 data points.
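The traditional ML baselines mentioned above can be reproduced in outline with scikit-learn, as in the sketch below. The hyperparameters and the train/test split are illustrative, since the paper does not report the exact settings used for its RFR and GBR models; the arrays are again the placeholders introduced earlier.

```python
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

# Illustrative split and default-ish hyperparameters.
X_tr, X_te, y_tr, y_te = train_test_split(X_train, y_train.ravel(),
                                          test_size=0.2, random_state=42)

rfr = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_tr, y_tr)
gbr = GradientBoostingRegressor(n_estimators=200, random_state=42).fit(X_tr, y_tr)

for name, reg in [("RFR", rfr), ("GBR", gbr)]:
    pred = reg.predict(X_te)
    print(f"{name}: R2 = {r2_score(y_te, pred):.3f}, MAE = {mean_absolute_error(y_te, pred):.4f}")
```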

Global sensitivity analysis (GSA)

CO 2 solubility in ILs is strongly influenced by input parameters such as temperature, pressure, and the presence of functional groups. Blanchard et al. 61 demonstrated efficient CO 2 dissolution in ILs at 25 °C and pressures up to 40 MPa. Extensive research has explored CO 2 absorption with ILs, encompassing both conventional ILs, which rely on physisorption, and functionalized ILs, which utilize chemisorption mechanisms 14 . Generally, for conventional ILs, the anion dominates CO 2 absorption, while the cation has a relatively minor effect.

The solubility of CO 2 in ILs was investigated through Global Sensitivity Analysis (GSA) to assess the relative impacts of the process parameters, including temperature, pressure, and the various functional groups, on its solubility behaviour. GSA is a robust approach that evaluates the influence of input parameters on outputs by allowing all inputs to vary within predefined ranges 47 , providing valuable insights into how input variations affect the overall system behaviour.

For GSA, two widely used techniques, Sobol sensitivity analysis 46 and Morris sensitivity analysis 48 , were applied to analyze the effect of the input variables on CO 2 solubility in ILs. In the Sobol method, the total sensitivity index ( S T ) quantifies the overall effect of an input variable on CO 2 solubility, including its interactions with the other variables. The Morris method employs the μ index, which represents the average of the elementary effects of each input variable over the sampled parameter space; it quantifies the average change in the model output when a variable is perturbed while the other variables are held constant, and higher μ values indicate a more significant influence of that variable on the model output.
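A minimal sketch of how the Sobol S T and Morris μ indices can be computed with the SALib library is given below (SALib is a common implementation of both methods, but the paper does not state which tool it used). The two-variable problem, the bounds, and the analytic stand-in for the trained model are illustrative assumptions; the study's full analysis also covers the 53 functional-group descriptors.

```python
import numpy as np
from SALib.sample import saltelli, morris as morris_sample
from SALib.analyze import sobol, morris as morris_analyze

# Illustrative two-variable problem (temperature in K, pressure in MPa); bounds are placeholders.
problem = {
    "num_vars": 2,
    "names": ["T", "P"],
    "bounds": [[273.0, 450.0], [0.1, 50.0]],
}

def surrogate(X):
    # Analytic stand-in for the trained model so the sketch runs on its own:
    # solubility increases with pressure and decreases with temperature.
    T, P = X[:, 0], X[:, 1]
    return P / (P + 10.0) - 0.001 * (T - 273.0)

# Sobol: total-order sensitivity index S_T per input.
X_sobol = saltelli.sample(problem, 1024)
S = sobol.analyze(problem, surrogate(X_sobol))
print("Sobol S_T:", dict(zip(problem["names"], S["ST"])))

# Morris: mean elementary effect (mu) per input.
X_morris = morris_sample.sample(problem, N=200, num_levels=4)
M = morris_analyze.analyze(problem, X_morris, surrogate(X_morris), num_levels=4)
print("Morris mu:", dict(zip(problem["names"], M["mu"])))
```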

Figure 14 presents the results of both the Sobol and Morris global sensitivity analyses for temperature ( T ), pressure ( P ), and the functional groups. While both methods provide valuable insights, they can present slightly different perspectives. Pressure emerges as the dominant factor affecting CO 2 solubility: both methods assign it a large sensitivity index in Fig. 14a, meaning that changes in pressure have a strong impact on the predicted CO 2 solubility values. The temperature ( T ) index is positive in the Sobol analysis and negative in the Morris analysis. The positive Sobol index for temperature seemingly contradicts the established knowledge that temperature has a negative impact on CO 2 solubility (i.e., higher temperature leads to lower CO 2 solubility). However, the Sobol method is sensitive to non-linear relationships between input parameters and the output, and the true relationship between temperature and CO 2 solubility may be non-linear within the range of the data; the positive Sobol index might capture an initial increase in CO 2 solubility followed by a decrease at higher temperatures, which a simple negative index would not reflect. Jerng et al. 62 have indicated that CO 2 solubility decreases with increasing temperature. The Morris method suggests a negative correlation between temperature and CO 2 solubility, aligning with the observation that CO 2 solubility increases as temperature decreases. Figure 14b,c display the sensitivity indices for the various functional groups; the graphs indicate that some functional groups have a minimal influence on CO 2 solubility, whereas others demonstrate a negative impact. Supplementary Table S1 (Supplementary Material) presents the sensitivity index values for each parameter across the dataset, as determined by the Sobol and Morris sensitivity analysis methods.

Figure 14. Sobol and Morris sensitivity indices of temperature and pressure with 53 functional groups.

When dealing with extensive datasets that include numerous input variables, the Morris method could be a preferable initial option over the Sobol sensitivity analysis. Due to its faster execution and lower computational demand, the Morris method is particularly beneficial for large-scale data processing, enabling rapid analysis with modest resource consumption. The Morris method serves as a valuable tool for initial screening. It can efficiently identify the most influential parameters (such as pressure and temperature) while filtering out those with a lower impact (certain functional groups).

This study offers a valuable combination of high accuracy, efficiency, and insights into model interpretability using deep learning models. Still, these models' ability to generalize to other solutes or liquid types remains unverified. Another limitation to consider is the computational cost of the LSTM model. Although both models achieve high accuracy, the LSTM model requires significantly more training time and memory resources than the ANN model. This could limit its applicability in real-world scenarios where computational power or hardware resources might be restricted.

Conclusions

This study investigated the potential of deep learning models for predicting CO 2 solubility in ionic liquids (ILs). A comprehensive dataset containing 10,116 CO 2 solubility measurements covering 164 different ILs under varying temperatures and pressures was used to train two deep neural network models: an Artificial Neural Network (ANN) and a Long Short-Term Memory (LSTM) network. Hyperparameter tuning, optimization, and a validation strategy were applied to evaluate the model performance comprehensively, and the efficiency of the ANN and LSTM models was compared by analyzing their computational demands and memory consumption throughout the training process. Both models demonstrated remarkable accuracy in predicting CO 2 solubility. The ANN model achieved a high R 2 of 0.985 in just 4 min of training, consuming 535 MiB of memory, whereas the LSTM model required significantly more training time (approximately 126 min) and more memory (735 MiB) to achieve a comparable R 2 of 0.984. This difference can be attributed to the inherent complexity of the LSTM architecture in handling sequential data. The ANN model achieved a 13% lower error than a previous study that used an ANN-GC model on a similar dataset; in this study, the number of neurons in the ANN model was optimized to achieve this higher accuracy and lower error. A review of the existing literature on prediction models developed for CO 2 capture in ILs was conducted to gain insights into the relationship between model performance and the characteristics of the ILs.

The Sobol and Morris sensitivity analysis methods were employed to investigate the relative importance of the input parameters on CO 2 solubility in ILs. The Morris sensitivity analysis identified pressure and temperature as having the most significant influence on CO 2 solubility in ILs, aligning well with experimental observations. The Morris method is a computationally efficient and easy-to-interpret technique for initial sensitivity analysis, particularly suitable for large datasets. The sensitivity analysis results provided valuable insights into the model's sensitivity to different parameters and helped identify the key factors driving CO 2 solubility.

This study offers significant advancements in predicting CO 2 solubility in ILs using deep learning models. The high accuracy and efficiency of the ANN model make it a promising tool for streamlining the screening process of ILs for CO 2 capture applications. This paves the way for further exploration of deep learning approaches for similar prediction tasks in CO 2 capture research and potentially extends its application to other areas of material science.

Data availability

The datasets used and analyzed during the current study are available from the corresponding author upon reasonable request.

Abbreviations

Activations (or the total inputs) for the neurons in the layers

Bias vectors for the layers

A memory state (a vector in an LSTM cell)

Cell state in an LSTM cell

Total variance in sensitivity analysis

First-order variance contribution of parameter i

Second-order variance contribution from the interaction between parameters i and j

Third-order variance contribution from the interaction between parameters i, j, and k

Elementary effect for the parameter i

Activation functions for the first layer

Activation functions for the second layer

Forget gate in an LSTM cell

Hidden state of the LSTM cell

Input gate in an LSTM cell

Output gate in an LSTM cell

First-order sensitivity index for parameter i

Total-order sensitivity index for parameter i

Pressure (Pa)

Input vector to the neural network

Coefficient of determination [–]

Temperature (K)

Weight matrices

Hyperbolic tangent activation function

Sigmoid activation function

Average of elementary effects for a parameter in Morris sensitivity analysis

Artificial neural network

Adaptive neuro-fuzzy inference system

Average absolute relative deviation (AARD)

Back Propagation (commonly used in neural networks)

Carbon dioxide

Convolutional neural network

Conditional inference trees

Conductor-like screening model for real solvents

Computer-aided molecular design

Cascade forward neural network

Cuckoo search algorithm least squares support vector machine

Central processing unit

Decision tree

Deep neural network

Diethanolamine

Extreme learning machine

Group contribution

Group method of data handling

Grey Wolf Optimization

General regression neural network

Gradient boost regressor

Hydrogen sulfide

Ionic liquids

Ionic fragments contribution

Least squares support vector machine

Mean absolute error

Mean squared error

Multilayer perceptron neural network

Multilayer perceptron artificial neural network

Multiple linear regression

Monoethanolamine

Methyldiethanolamine

Pressure swing absorption

Peng–Robinson–Stryjek–Vera

Peng–Robinson equation of state

Partial-least-squares regression

Particle swarm optimization

Quantitative structure–property relationship

Rectified linear activation function

Recurrent neural network

Radial basis function (network or kernel)

Random forest

Root mean square error

Soave–Redlich–Kwong equation of state

Support vector machine

Statistical associating fluid theory

Sparrow search algorithm

Temperature swing adsorption

EXtreme gradient boosting

Tetrafluoroborate

Dicyanamide

Hexafluorophosphate

Tricyanomethanide

Bis(trifluoromethylsulflonyl)amide

Hydrogen sulfate

Methylsulfate

Zheng, S. et al. State of the art of ionic liquid-modified adsorbents for CO 2 capture and separation. AIChE J. 68 , e17500 (2022).


Arellano, I. H., Madani, S. H., Huang, J. & Pendleton, P. Carbon dioxide adsorption by zinc-functionalized ionic liquid impregnated into bio-templated mesoporous silica beads. Chem. Eng. J. 283 , 692–702 (2016).

Krótki, A. et al. Experimental results of advanced technological modifications for a CO 2 capture process using amine scrubbing. Int. J. Greenh. Gas Control 96 , 103014 (2020).


Zhou, Y. et al. Tetra-n-heptyl ammonium tetrafluoroborate: Synthesis, phase equilibrium with CO 2 and pressure swing absorption for carbon capture. J. Supercrit. Fluids 120 , 304–309 (2017).

Jiang, L. et al. Comparative analysis on temperature swing adsorption cycle for carbon capture by using internal heat/mass recovery. Appl. Therm. Eng. 169 , 114973 (2020).

Guo, M. et al. Amino-decorated organosilica membranes for highly permeable CO 2 capture. J. Membr. Sci. 611 , 118328 (2020).

Polesso, B. B. et al. Supported ionic liquids as highly efficient and low-cost material for CO 2 /CH 4 separation process. Heliyon 5 , e02183 (2019).


Lian, S. et al. Recent advances in ionic liquids-based hybrid processes for CO 2 capture and utilization. J. Environ. Sci. 99 , 281–295. https://doi.org/10.1016/j.jes.2020.06.034 (2021).

Davarpanah, E., Hernández, S., Latini, G., Pirri, C. F. & Bocchini, S. Enhanced CO 2 absorption in organic solutions of biobased ionic liquids. Adv. Sustain. Syst. 4 , 1900067 (2020).

Zhang, X. et al. Carbon capture with ionic liquids: Overview and progress. Energy Environ. Sci. 5 , 6668–6681 (2012).

Babamohammadi, S., Shamiri, A. & Aroua, M. K. A review of CO 2 capture by absorption in ionic liquid-based solvents. Rev. Chem. Eng. 31 , 383–412 (2015).

Kenarsari, S. D. et al. Review of recent advances in carbon dioxide separation and capture. RSC Adv. 3 , 22739–22773 (2013).


Theo, W. L., Lim, J. S., Hashim, H., Mustaffa, A. A. & Ho, W. S. Review of pre-combustion capture and ionic liquid in carbon capture and storage. Appl. Energy 183 , 1633–1663 (2016).

Zeng, S. et al. Ionic-liquid-based CO 2 capture systems: Structure, interaction and process. Chem. Rev. 117 , 9625–9673 (2017).


Johnson, K. E. What’s an ionic liquid?. Electrochem. Soc. Interface 16 , 38 (2007).

Ramdin, M., de Loos, T. W. & Vlugt, T. J. State-of-the-art of CO 2 capture with ionic liquids. Ind. Eng. Chem. Res. 51 , 8149–8177 (2012).

Krupiczka, R., Rotkegel, A. & Ziobrowski, Z. Comparative study of CO 2 absorption in packed column using imidazolium based ionic liquids and MEA solution. Sep. Purif. Technol. 149 , 228–236 (2015).

Weis, D. C. & MacFarlane, D. R. Computer-aided molecular design of ionic liquids: An overview. Aust. J. Chem. 65 , 1478–1486 (2012).

Holderbaum, T. & Gmehling, J. PSRK: A group contribution equation of state based on UNIFAC. Fluid Phase Equilib. 70 , 251–265 (1991).

Mourah, M., NguyenHuynh, D., Passarello, J., De Hemptinne, J. & Tobaly, P. Modelling LLE and VLE of methanol+ n-alkane series using GC-PC-SAFT with a group contribution kij. Fluid Phase Equilib. 298 , 154–168 (2010).

Fredenslund, A., Jones, R. L. & Prausnitz, J. M. Group-contribution estimation of activity coefficients in nonideal liquid mixtures. AIChE J. 21 , 1086–1099 (1975).

Eckert, F. & Klamt, A. Fast solvent screening via quantum chemistry: COSMO-RS approach. AIChE J. 48 , 369–385 (2002).

Tatar, A. et al. Prediction of carbon dioxide solubility in ionic liquids using MLP and radial basis function (RBF) neural networks. J. Taiwan Inst. Chem. Eng. 60 , 151–164 (2016).

Faúndez, C. A., Fierro, E. N. & Valderrama, J. O. Solubility of hydrogen sulfide in ionic liquids for gas removal processes using artificial neural networks. J. Environ. Chem. Eng. 4 , 211–218 (2016).

Mulero, Á., Cachadiña, I. & Valderrama, J. O. Artificial neural network for the correlation and prediction of surface tension of refrigerants. Fluid Phase Equilib. 451 , 60–67 (2017).

Sun, J., Sato, Y., Sakai, Y. & Kansha, Y. A review of ionic liquids design and deep eutectic solvents for CO 2 capture with machine learning. J. Clean. Prod. 414 , 137695 (2023).

Eslamimanesh, A., Gharagheizi, F., Mohammadi, A. H. & Richon, D. Artificial neural network modelling of solubility of supercritical carbon dioxide in 24 commonly used ionic liquids. Chem. Eng. Sci. 66 , 3039–3044 (2011).

Venkatraman, V. & Alsberg, B. K. Predicting CO 2 capture of ionic liquids using machine learning. J. CO2 Util. 21 , 162–168 (2017).

Soleimani, R., Saeedi Dehaghani, A. H. & Bahadori, A. A new decision tree based algorithm for prediction of hydrogen sulfide solubility in various ionic liquids. J. Mol. Liq. 242 , 701–713. https://doi.org/10.1016/j.molliq.2017.07.075 (2017).

Song, Z., Shi, H., Zhang, X. & Zhou, T. Prediction of CO 2 solubility in ionic liquids using machine learning methods. Chem. Eng. Sci. 223 , 115752–115752. https://doi.org/10.1016/j.ces.2020.115752 (2020).

Deng, T., Liu, F.-H. & Jia, G.-Z. Prediction carbon dioxide solubility in ionic liquids based on deep learning. Mol. Phys. 118 , e1652367–e1652367 (2020).


Tian, Y., Wang, X., Liu, Y. & Hu, W. Prediction of CO 2 and N 2 solubility in ionic liquids using a combination of ionic fragments contribution and machine learning methods. J. Mol. Liq. 383 , 122066 (2023).

Liu, Z., Bian, X.-Q., Duan, S., Wang, L. & Fahim, R. I. Estimating CO 2 solubility in ionic liquids by using machine learning methods. J. Mol. Liq. 391 , 123308 (2023).

Samra, M. N. A., Abed, B. E. E.-D., Zaqout, H. A. N. & Abu-Naser, S. S. ANN model for predicting protein localization sites in cells. Int. J. Acad. Appl. Res. IJAAR. 4 (2020).

Mirarab, M., Sharifi, M., Behzadi, B. & Ghayyem, M. A. Intelligent prediction of CO 2 capture in propyl amine methyl imidazole alanine ionic liquid: An artificial neural network model. Sep. Sci. Technol. 50 , 26–37 (2015).

Zhou, G.-S. et al. Hydrophilic interaction chromatography combined with ultrasound-assisted ionic liquid dispersive liquid–liquid microextraction for determination of underivatized neurotransmitters in dementia patients’ urine samples. Anal. Chim. Acta 1107 , 74–84 (2020).

Baghban, A., Ahmadi, M. A. & Shahraki, B. H. Prediction carbon dioxide solubility in presence of various ionic liquids using computational intelligence approaches. J. Supercrit. Fluids 98 , 50–64. https://doi.org/10.1016/J.SUPFLU.2015.01.002 (2015).

Baghban, A., Mohammadi, A. H. & Taleghani, M. S. Rigorous modelling of CO 2 equilibrium absorption in ionic liquids. Int. J. Greenh. Gas Control. 58 , 19–41 (2017).

Zhang, X., Wang, J., Song, Z. & Zhou, T. Data-driven ionic liquid design for CO 2 capture: Molecular structure optimization and DFT verification. Ind. Eng. Chem. Res. 60 , 9992–10000 (2021).

Daryayehsalameh, B., Nabavi, M. & Vaferi, B. Modelling of CO 2 capture ability of [Bmim][BF4] ionic liquid using connectionist smart paradigms. Environ. Technol. Innov. 22 , 101484–101484. https://doi.org/10.1016/j.eti.2021.101484 (2021).

Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (Adaptive Computation and Machine Learning Series) , 321–359 (Cambridge Massachusetts, 2017).

Hochreiter, S. The vanishing gradient problem during learning recurrent neural nets and problem solutions. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 6 , 107–116 (1998).

Hochreiter, S. & Schmidhuber, J. Long short-term memory. Neural Comput. 9 , 1735–1780 (1997).

Siami-Namini, S., Tavakoli, N. & Namin, A. S. In 2019 IEEE International Conference on Big Data (Big Data). 3285–3292 (IEEE).

Xiang, Z., Yan, J. & Demir, I. A rainfall-runoff model with LSTM-based sequence-to-sequence learning. Water Resour. Res. 56 , e2019WR025326-e022019WR025326 (2020).

Sobol, I. M. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Math. Comput. Simul. 55 , 271–280 (2001).


Homma, T. & Saltelli, A. Importance measures in global sensitivity analysis of nonlinear models. Reliab. Eng. Syst. Saf. 52 , 1–17 (1996).

Morris, M. D. Factorial sampling plans for preliminary computational experiments. Technometrics 33 , 161–174 (1991).

van Griensven, A. V. et al. A global sensitivity analysis tool for the parameters of multi-variable catchment models. J. Hydrol. 324 , 10–23 (2006).

Ruder, S. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747 (2016).

Roeder, L. Netron: Visualizer for Neural Network, Deep Learning and Machine Learning Models . https://www.lutzroeder.com/ai

Abhishek, K., Singh, M., Ghosh, S. & Anand, A. Weather forecasting model using artificial neural network. Proc. Technol. 4 , 311–318 (2012).

Krogh, A. What are artificial neural networks?. Nat. Biotechnol. 26 , 195–197 (2008).

Gal, Y. & Ghahramani, Z. A theoretically grounded application of dropout in recurrent neural networks. Adv. Neural Inf. Process. Syst. 29 (2016).

Mesbah, M., Shahsavari, S., Soroush, E., Rahaei, N. & Rezakazemi, M. Accurate prediction of miscibility of CO 2 and supercritical CO 2 in ionic liquids using machine learning. J. CO 2 Util. 25 , 99–107. https://doi.org/10.1016/j.jcou.2018.03.004 (2018).

Aghaie, M. & Zendehboudi, S. Estimation of CO 2 solubility in ionic liquids using connectionist tools based on thermodynamic and structural characteristics. Fuel 279 , 117984–117984. https://doi.org/10.1016/j.fuel.2020.117984 (2020).

Ghaslani, D., Gorji, Z. E., Gorji, A. E. & Riahi, S. Descriptive and predictive models for Henry’s law constant of CO 2 in ionic liquids: A QSPR study. Chem. Eng. Res. Des. 120 , 15–25 (2017).

Dashti, A., Riasat Harami, H., Rezakazemi, M. & Shirazian, S. Estimating CH 4 and CO 2 solubilities in ionic liquids using computational intelligence approaches. J. Mol. Liq. 271 , 661–669. https://doi.org/10.1016/j.molliq.2018.08.150 (2018).

Xia, L., Wang, J., Liu, S., Li, Z. & Pan, H. Prediction of CO 2 solubility in ionic liquids based on multi-model fusion method. Processes 7 , 258–258 (2019).

Moosanezhad-Kermani, H., Rezaei, F., Hemmati-Sarapardeh, A., Band, S. S. & Mosavi, A. Modelling of carbon dioxide solubility in ionic liquids based on group method of data handling. Eng. Appl. Comput. Fluid Mech. 15 , 23–42 (2021).


Blanchard, L. A., Hancu, D., Beckman, E. J. & Brennecke, J. F. Green processing using ionic liquids and CO 2 . Nature 399 , 28–29 (1999).

Jerng, S. E., Park, Y. J. & Li, J. Machine learning for CO 2 capture and conversion: A review. Energy AI 16 , 100361. https://doi.org/10.1016/j.egyai.2024.100361 (2024).


Author information

Authors and affiliations

Department of Chemical Engineering, Dawood University of Engineering & Technology, Karachi, Pakistan

Mazhar Ali, Tooba Sarwar & Shaukat Ali Mazari

Petroleum and Chemical Engineering, Faculty of Engineering, Universiti Teknologi Brunei, Bandar Seri Begawan, BE1410, Brunei Darussalam

Nabisab Mujawar Mubarak & Rama Rao Karri

Department of Chemistry, School of Chemical Engineering and Physical Sciences, Lovely Professional University, Phagwara, Punjab, 144411, India

Nabisab Mujawar Mubarak

INTI International University, 71800, Nilai, Negeri Sembilan, Malaysia

Rama Rao Karri

Materials Engineering Department, Mustansiriayah University, Baghdad, 14022, Iraq

Lubna Ghalib

Department of Education, NUML, Islamabad, Pakistan


Contributions

M.A.: Conceptualization, methodology, software, validation, writing—original draft preparation, writing—review and editing. T.S.: Methodology, software, data curation, writing—original draft preparation, visualization, writing—review and editing. N.M.M.: Conceptualization, validation, and writing—review and editing. R.R.K.: Conceptualization, validation, and writing—review and editing. S.A.M.: Conceptualization, methodology, validation, formal analysis, writing-original draft preparation, and writing—review and editing. L.G.: Writing—review and editing. A.B.: Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Corresponding authors

Correspondence to Nabisab Mujawar Mubarak , Rama Rao Karri or Shaukat Ali Mazari .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Ali, M., Sarwar, T., Mubarak, N.M. et al. Prediction of CO 2 solubility in Ionic liquids for CO 2 capture using deep learning models. Sci Rep 14 , 14730 (2024). https://doi.org/10.1038/s41598-024-65499-y


Received : 25 January 2024

Accepted : 20 June 2024

Published : 26 June 2024

DOI : https://doi.org/10.1038/s41598-024-65499-y


  • CO 2 capture
  • Deep learning
  • Global sensitivity analysis
